Get started with Ethereum development

This is kinda part of the Maneki-Neko series, as it explains the basic setup to get started with Ethereum development.

Ethereum is a blockchain-based ledger system and a cryptocurrency. I want to build a small application where you can send some ether to a wallet and the cat will wave or do other things (that I’ll need to teach her).

The first thing is to get an Ethereum client running and connect it to a test blockchain.

Note: I’m running Linux Mint Sylvia (an Ubuntu derivative) and everything is based on that.

1. Install and sync Parity to the TestNet

First we install the Ethereum client Parity from https://parity.io

  1. Go to https://parity.io and download the dpkg file
  2. Install it with sudo dpkg -i parity_1.10.4_ubuntu_amd64.deb (your version might vary, though)
  3. Start parity on the kovan chain (https://github.com/kovan-testnet/config) with parity --chain=kovan and wait for it to sync. It’s finished when you see Imported #<blocknum> messages in the terminal

2. Create a wallet/account on the kovan chain

  1. Run parity --chain=kovan account new
  2. Give the account a password
  3. It will spit out your Ethereum address on the kovan blockchain

You can always look up your accounts using parity --chain=kovan account list

(Note: Parity also ships with a UI. You can start it by running parity --chain=kovan ui)

3. Get some ether to play with

Now we need some money, either to spend on gas (you need this to pay the network for running your smart-contract code) or to send to other accounts on the test network.

To get some for free we can use a Kovan Faucet service.

A list of services can be found here https://github.com/kovan-testnet/faucet

If you already have some ether on the “real” blockchain, aka mainnet, you can SMS-verify your mainnet account and get some test ether automatically that way.

But here on this blog we’re cheap and want to avoid spending money where possible, so we will use the Gitter faucet.

For that you need your account address on the kovan chain. See above if you forgot how to get it.

With that we go to https://gitter.im/kovan-testnet/faucet and log in with our GitHub account (click “Sign in to start talking”).
After authentication we choose “Join room”, copy’n’paste our account address and send it to the room. Now we watch the chat and see if our address gets some funds.

After you see your address get funded, start Parity with the UI flag (parity --chain=kovan ui), navigate to the Accounts tab, and you should have 5.00 KETH in your dev account to play with.
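
If you prefer the terminal over the UI, you can also check the balance via Parity’s JSON-RPC interface. A minimal sketch, assuming the RPC server is enabled on its default port 8545 and <your-address> is the account created above (the result comes back as hex-encoded wei):

$ curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["<your-address>","latest"],"id":1}' \
    http://localhost:8545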

Maneki-neko with WiFi (Part 1)

Sometimes I just want to build something stupid. And today I had a thought about something really stupid and pointless.

A Maneki-Neko with Wifi and Bluetooth.

Those adorable Japanese cats that constantly wave their paw to welcome people and give them luck and happiness.

The Maneki neko

The idea I had is to simply use a Pi Zero, connect a GPIO to the arm’s drive mechanism and hook it up to the net somehow.
To use the Pi that way I had to make sure the arm wouldn’t draw too much current, so I could hook it up directly to the GPIO.

But the cat is supposed to operate for weeks, even months, on a single AA battery, so it can’t be that much!

Catfood measurements

Confirmed: the cat draws 1.5 mA max. That should work.

Getting the Pi Zero ready

The next steps are totally non-cat related and focus on getting the RPi to some kind of headless operation.

My goals are:

  • Headless boot
  • SSH over WiFi
  • Configure the WiFi via Bluetooth from a React Native app

I have an older Pi Zero without the on-board WiFi, so I directly ordered the Pi Zero W that will go into the cat later on.
But for the next few days a USB WiFi adapter and a USB BT dongle will have to do.

I used the latest Raspbian image and then (with keyboard and monitor hooked up) went through the basic setup:

  1. Set a hostname
  2. Set up the SSH server
  3. Manually set up WiFi so I can work on the Pi (see the sketch below)
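
Step 3 boiled down to shell commands - a minimal sketch, assuming the adapter shows up as wlan0; the SSID and passphrase are placeholders:

$ wpa_passphrase "MyHomeWiFi" "supersecret" | sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf
$ sudo wpa_cli -i wlan0 reconfigure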

After that I was ready to tackle the Bluetooth stuff.

Make the Pi listen on Bluetooth

First I had to update the Bluetooth systemd unit at /lib/systemd/system/bluetooth.service and add/change the ExecStart lines:

ExecStart=/usr/lib/bluetooth/bluetoothd -C
ExecStartPost=/usr/bin/sdptool add SP
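
To make systemd pick up the edited unit, reload and restart Bluetooth afterwards:

$ sudo systemctl daemon-reload
$ sudo systemctl restart bluetooth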

Then I created a new service for rfcomm. rfcomm listens on a Bluetooth channel on the dongle and starts a terminal, so I kept it basic:

[Unit]
Description=RFCOMM service
After=bluetooth.service
Requires=bluetooth.service

[Service]
ExecStart=/usr/bin/rfcomm watch hci0 1 getty rfcomm0 115200 vt100 -a pi

[Install]
WantedBy=multi-user.target

and saved it as /etc/systemd/system/rfcomm.service. That waits for connections on channel 1 and opens a shell on that channel, just like a TTY terminal.
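
To have it come up at boot and start it right away, the usual systemd commands apply:

$ sudo systemctl daemon-reload
$ sudo systemctl enable rfcomm.service
$ sudo systemctl start rfcomm.service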

Connected from my phone to the Zero’s Bluetooth TTY

The next step is to write a small program that gets started instead of bash, so we can do the WiFi configuration.

Customizing the Bluetooth ‘protocol’

Now I want to replace the normal shell session with my own program.
So I put a very basic Python program together that responds with MANEKINEKO and echoes the input from the client:

#!/usr/bin/python
from sys import stdin

print "MANEKINEKO"
input = stdin.readline()
print "ECHO: ", input

I put that into /home/pi/wifi-setup.py and made it executable (chmod 755 /home/pi/wifi-setup.py). To have it run after a client connects via Bluetooth, I just changed the getty command in /etc/systemd/system/rfcomm.service and replaced the -a pi with -ni -l /home/pi/wifi-setup.py, which directly runs the command as root. I know, I know, but eh! (at least for now :-P )
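
For reference, the ExecStart line in /etc/systemd/system/rfcomm.service then reads:

ExecStart=/usr/bin/rfcomm watch hci0 1 getty rfcomm0 115200 vt100 -ni -l /home/pi/wifi-setup.py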

After a systemctl daemon-reload && service rfcomm restart I reconnected and my Python script answered me!

HOORAY!!

That’s it for part 1. Part 2 will probably cover the development of a very small React Native app that talks to the Pi Zero via Bluetooth to set it up.

How this blog auto-deploys

This blog is made with Hexo, a simple npm-based blog framework. Up to now (the whole two weeks I’ve been running this blog), I had a local publish script that generated the static output of Hexo, built the Docker container locally and pushed it to the GitLab Docker registry for that project. Then I had to navigate my browser to my Rancher admin GUI and manually upgrade the blog service with the new Docker image. Way too much effort! So I went and put my automation hat on.

The goal was that every time I push/merge to master, the blog gets generated and published.

So, what is my starting point?

  • My GitLab also runs on Docker, including the CI/CD runners, so the build process itself has to be Docker-based
  • I run everything in Rancher 1.x, so it needs to deploy a Rancher stack

Setting up the GitLab build

The first order of business is to get the Hexo blog built on the GitLab CI. For that I added a Docker Compose file named docker-compose.build.yml with the following content:

version: '3'

services:
  ci-build:
    image: node
    volumes:
      - .:/src
    working_dir: /src
    command: /bin/bash -c "npm install && node_modules/.bin/hexo generate"

Here we simply take the official node Docker image, map our project directory into the container, install Hexo (registered in the package.json) through npm and run it. As the project directory is mounted, the generated content ends up right in our project directory.
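
You can run the exact same thing locally to test it before pushing, using the command the CI will run later:

$ docker-compose -f ./docker-compose.build.yml run ci-build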

Next we need to instruct the GitLab CI to run this Docker Compose file. As the runner itself is running in Docker, we do something called Docker-in-Docker, or dind.

Here is the full .gitlab-ci.yml we need to control the GitLab CI:

image: markuskeil/docker-and-rancher:latest

# When using dind, it's wise to use the overlayfs driver for
# improved performance.
variables:
  DOCKER_DRIVER: overlay2
  COMMIT_IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE_TAG: $CI_REGISTRY_IMAGE:latest

services:
  - docker:dind

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  tags:
    - docker
  script:
    - docker-compose -f ./docker-compose.build.yml run ci-build
    - docker build -t $COMMIT_IMAGE_TAG .
    - docker push $COMMIT_IMAGE_TAG
    - docker tag $COMMIT_IMAGE_TAG $RELEASE_IMAGE_TAG
    - docker push $RELEASE_IMAGE_TAG

deploy:
  stage: deploy
  tags:
    - docker
  dependencies:
    - build
  only:
    - master
  script:
    - rancher-compose -p keil-connect-blog up -p -d -c --force-upgrade

First we define the Docker image that GitLab should use to run our build and deploy scripts in. It’s an image I created with inspiration from the https://hub.docker.com/r/jonaskello/docker-and-compose/ image. I used the same commands from Jonas’ Dockerfile and added the rancher-compose tool to it.
That way we have a meta-image that can handle all our Docker build and Rancher deploy tasks.

The build stage does all the heavy lifting and runs every time we push to the GitLab repo. It runs our Hexo build compose file and then uses the Dockerfile to put the generated content into an nginx-based image. The resulting image gets pushed to GitLab’s Docker registry.
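
The Dockerfile itself isn’t shown here; a minimal sketch of what it could look like, assuming Hexo writes its output to public/ and an nginx base image is used:

$ cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY public/ /usr/share/nginx/html/
EOF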

If I push to master, it runs the deploy stage, which then uses the docker-compose.yml below to create/update the keil-connect-blog stack on my Rancher instance. The Rancher host and access keys are delivered to the environment by GitLab’s Secret Variables feature (see the sketch after the compose file).

version: '2'
services:
  blog:
    image: docker.keil-connect.com/websites/www.keil-connect.com:latest
    stdin_open: true
    tty: true
    labels:
      io.rancher.container.pull_image: always

That compose file is super minimalistic: basically just the image from the GitLab registry and a couple of options.
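
The Rancher host and keys mentioned above end up as environment variables that rancher-compose reads, so locally you can test the deploy step roughly like this (endpoint and keys are placeholders; variable names as documented for rancher-compose):

$ export RANCHER_URL=https://rancher.example.com
$ export RANCHER_ACCESS_KEY=<access-key>
$ export RANCHER_SECRET_KEY=<secret-key>
$ rancher-compose -p keil-connect-blog up -p -d -c --force-upgrade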

After the first deploy of the Rancher stack I manually wired the generated blog service to the load balancer. That wiring keeps working as long as I keep the service and stack names the same.

And that’s it. Now I can easily bring you new posts without fiddling with Docker or Rancher, and this post is the first one deployed this way! Yay!

Building a cheapish Robot

Part of raising kids is getting them educated in things they don’t learn in school. One thing I want to teach them is how all the tech around them works, and to empower them to build their own.

For that I bought a reasonably cheap Arduino-based robot kit from Amazon, the LSM2 Smart Robot Car Kit, for around 65 euros.

The assembled robot

It was clear that the kit wouldn’t be one of the polished ones with kid-friendly instructions in a book. But I’m OK with that; it gives me more room to explain stuff to my kids. Still, I didn’t expect how crude the instructions would be.

For a start, there was only a CD-R in there, no printed instructions whatsoever. OK, I popped in the CD and only found a couple of folders, one of them containing a few images that crudely explained how to assemble the basic hardware.

One of the instruction pages

But after an hour we got the robot assembled, and there were steps where my son’s small fingers really helped.

We used the simplest hardware setup to get the bot moving, which is the “IR Remote Experiment” where you only connect the IR sensor to the Arduino and upload the .ino file. And it worked out of the box! The motors are strong, it gained quite some speed, and the wheels are big enough to get over bumps. Only the support wheel is a bit too stiff and might need a suspension mod to run over rough ground for longer periods.

But wait! We then wanted to switch into autonomous mode and activate the ultrasonic sensors to get the bot self-driving and avoiding obstacles, but the cables that came with it were only male-female or male-male. We would have needed female-female ones to get the sensor and the servo hooked up. And they weren’t in the box. What a shame. So back to Amazon to order some.

So the verdict: for that kind of money you get a decent amount of stuff, even if it’s not everything you need. Despite the crude setup and instructions, the examples work out of the box and the code isn’t that bad, if you know your way around the Arduino ecosystem. So if you already have Arduino experience and want to dip your toes (or your kids’) into robots, it’s a good kit. (And maybe order a couple of cables along with it.)

Cloud Backups - A hacker's approach

Since the dawn of time - or at least since we’ve had computers with broadband connections - we’ve wanted to back up all our files to computers all around the world, so we know they are safe from natural disasters, the agencies, or just the cup of coffee you will spill over your laptop.

Now with the rise of all the cloud providers (Amazon AWS, Azure and Google), online backup space became cheap and online backup services came to life. And they are totally fine if you only have a couple of gigabytes to save or only have a single computer. This is enough for most of the population and mostly free of charge or below 10 bucks a month.

BUT as a hacker and computer nerd you most certainly have at least a dozen machines under your control and a metric shitload of data you wanna back up. This is where I was a couple of days ago.

What i want

So what is it that i want from my backup solution:

  • Store unlimited amounts of data (or at least ~10TB to begin with)
  • Have it strongly encrypted with a key only i have, so only i can decrypt it
  • It has to be cheap - free would be great of course :-)

What the internet provides

After a bit of digging, the cheapest storage you can get is Amazon Glacier at 0.004 USD/GB/month.
The API is a bit cumbersome and the AWS console is not one of the best interfaces to work with, but eh.

Let’s do the math and see if it is viable:

10000 GB * 0.004 USD/GB/month = 40 USD/month

Meh! 40 Bucks is too much for me. There has to be a better and cheaper way!

Other online backup solutions that provide unlimited storage often only back up a single Windows PC or Mac, so that’s not a nice solution either.

Back to the early hacker days

I remembered that I have a Usenet account lying around. Yes, that Usenet! Where the old graybeards on their Unix terminals went to discuss whether vi or emacs is the best editor.

The usenet is still with us and there are quite a few providers out there that not only store the editor flamewars but also tons of binary articles. These articles are just files stored on those servers, from cat pictures to tentacle porn. And don’t forget they all sync the data between them, spread over the entire world on dozens of servers.

And all that for ~15 EUR/month at my provider.

So why not just use that incredible source of disk space and bandwidth for our purposes? Let’s try it!

The Backup Process

To store our backups securely on the usenet we need to do a bit more than just tar them up and throw them onto the usenet:

  1. We’ll pack them with tar and compress using 7-Zip

    $ tar --xattrs -cpvf - <your-directory-to-backup> | 7z a -si backup-file.tar.7z

    That tar command will also save your file permissions and extended attributes.

  2. Encrypt the backup using GnuPG
    You have to generate a GnuPG key first using gpg --gen-key if you don’t have one already, and then encrypt the tar file.

    $ gpg -r <your-key-id> -e backup-file.tar.7z

    You’ll get a new file named backup-file.tar.7z.gpg.

  3. Checksum that encrypted file and write down the hash

    $ sha256sum backup-file.tar.7z.gpg | head -c 64

    We’ll use the hash as the basename for our final files, so you can easily find them on the usenet but nobody else has a clue what they contain.
    I have prefixed mine with a little three-letter code like abc_, but the SHA alone should be enough.

  4. Split the encrypted file into chunks of 50 MB each

    $ split -b 50M backup-file.tar.7z.gpg abc_<CHECKSUM>.part
  5. Generate checksums for all files
    Now we want to store the checksums for all the files and save them in case we need to restore, so we can verify that we got them back correctly from the usenet and also that we have stitched the backup back together correctly. SHA512 might be overkill here, but eh. We have it, so we’re gonna use it.

    $ sha512sum * >> backup_summary.txt
  6. Create .par2 files for repairing potentially bad blocks when we download the backup from the usenet
    PAR files are common amongst usenet users and most binary usenet clients have built-in support, so they will verify and fix your files automatically after download.

    $ par2create -r10 -n7 abc_<CHECKSUM> abc_*

    This gives us 10% redundancy, so up to 10% of the downloaded data can be bad blocks and we’re still able to recover.

  7. Push them out the door and onto the usenet
    Now it gets juicy. We finally have everything together to push the backup to the usenet. For that we use Nyuu, a fast usenet upload client written in Node.js.

    $ nyuu -h <your-usenet-server> -P <server-port> -S -u <username> -p <password> -s "abc_<CHECKSUM>" -o "abc_<CHECKSUM>.nzb" abc_*.part* *.par2

    Now watch the magic happen!

    After the process you’ll find an .nzb file in the working directory. Keep it along with the backup_summary.txt in a secure place.
    You can later feed this file to a usenet client and it will download your backup automatically.

Of course you will put that stuff into a script ;-)
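
Something along these lines - a rough sketch that strings the steps above together; the GnuPG key id and the usenet server settings are expected in the environment, and abc is just the example prefix from above:

#!/bin/bash
# usenet-backup.sh <directory-to-backup>
# Expects GPG_KEY_ID, USENET_HOST, USENET_PORT, USENET_USER, USENET_PASS in the environment.
set -e

SRC="$1"
PREFIX="abc"

# 1. tar + 7z
tar --xattrs -cpvf - "$SRC" | 7z a -si backup-file.tar.7z

# 2. encrypt for our own key
gpg -r "$GPG_KEY_ID" -e backup-file.tar.7z

# 3. the hash of the encrypted file becomes the basename
SUM=$(sha256sum backup-file.tar.7z.gpg | head -c 64)

# 4. split into 50MB chunks
split -b 50M backup-file.tar.7z.gpg "${PREFIX}_${SUM}.part"

# 5. checksums of the chunks for the restore case
sha512sum "${PREFIX}_${SUM}".part* >> backup_summary.txt

# 6. 10% PAR2 redundancy
par2create -r10 -n7 "${PREFIX}_${SUM}" "${PREFIX}_${SUM}".part*

# 7. upload with Nyuu; the .nzb ends up next to the parts
nyuu -h "$USENET_HOST" -P "$USENET_PORT" -S -u "$USENET_USER" -p "$USENET_PASS" \
  -s "${PREFIX}_${SUM}" -o "${PREFIX}_${SUM}.nzb" "${PREFIX}_${SUM}".part* ./*.par2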

Restore Process

Restore is pretty easy (see the command sketch after the list).

  1. Use the .nzb to download your backup
  2. Use sha512sum and the backup_summary.txt to verify your files are good
  3. Do cat abc_<CHECKSUM>.part* > backup-file.tar.7z.gpg to stitch your encrypted file back together
  4. Decrypt with GnuPG, then unpack the 7z and extract the tar.
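
As a command sketch (same assumptions as above: the abc_ prefix and the backup-file.tar.7z naming):

$ sha512sum -c backup_summary.txt
$ cat abc_<CHECKSUM>.part* > backup-file.tar.7z.gpg
$ gpg -o backup-file.tar.7z -d backup-file.tar.7z.gpg
$ 7z x -so backup-file.tar.7z | tar --xattrs -xpvf -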

Final Notes

I’ve only used this approach for a few days, so I’m not totally certain that your files will stay on the usenet forever, but most providers have a retention time of a couple of years, so you should be good. But like any good hacker you should have other backups on separate offline media anyway (also encrypted, of course).