I decided to reactivate this blog and document my experiences as a mentor at Google Summer of Code.
If you don’t know what Google Summer of Code is, here’s the tl;dr: It’s an annual scholarship program by Google, where students from all around the world get sponsored to work full time for three months. The interesting thing is that they don’t work for Google, but for a number of open source projects.
This year, coala, an open source project I joined last year, is one of the lucky organizations selected for the program. Having already participated last year as a sub-organisation, we got selected as an independent project for the first time and were granted 10 slots right away. This is unusually high for such a young project and we are extremely grateful for the trust Google has placed in us.
After receiving an overwhelming amount of proposals of the highest quality, the project admins had the difficult task of selecting just 10. Saurav Singh aka damngamerz is one of the lucky winners, and I have been assigned to co-mentor him. Check out his blog post if you want a taste of the excitement and some info about his project in particular.
As this is a very new experience for me, I’ll try to document it in this blog. We’re still very early in the program and so far have had some nice chat and phone sessions to set everything up. I am very excited to see how this summer will progress!
Recently, I have been doing a lot of work a server administrator would normally do: I set up a continuous integration system, played around with virtualization at work and deployed a couple of Python apps for a side project. Being a software developer, this has been both a challenging and fun experience. It also got me thinking about all the buzz around dev-ops you read nowadays. Dev-ops (development and operations) refers to a company structure where the development and server administration teams work very closely together, or even without a clear distinction between the two fields. I am very much in favor of close collaboration, but the approach also has its caveats.
As a developer, my main responsibility is to create and maintain good software, and although I know the basic concepts, I am not trained in systems administration. Yes, I can install a server and get my application running. I will also go ahead and disable root login via SSH to prove my security lectures were not a complete waste of time. But without following the subject closely and practicing and improving my skills on a day-to-day basis, like a full-time sys-admin would, it is very hard to bring a system into a state where security best practices are followed, redundancy is properly implemented and good sleep can be found at night.
For this reason, the least we can do as developers dropped into the engine room is to document very carefully what we do. Because what will happen is this: we will get it to work, walk away and forget about it. Then a week later something will break and we’ll have to put on the old sys-admin gloves once more, only to redo the whole research and learning process again. Or maybe somebody asks you how you set up the external juggly-tubes on that one server. "The what? Gee, I don’t remember." Good documentation – a recipe for getting a specific job done – can save the day.
And the best documentation when it comes to server administration is automation.
Automation comes in many shapes and forms, depending on the job and the personal preferences of the people involved. It can be an elaborate shell script or a complex automation tool. In any case you’ll end up with something that describes the task you were trying to complete – and that also completes it. Perfect. Next time somebody calls to tell you the fancy new app on the virtual cloud docker thing has crashed, you can just press a button and automagically re-deploy the thing.
So, please don’t leave machines in an undocumented state running in your production or development environments; better yet, make sure everything is properly automated and the process is reviewed every now and then.
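To make that concrete, a one-button re-deploy could look like the following sketch. The app directory, repository and service name are made up for illustration, and it defaults to a dry run for safety:

```shell
#!/bin/sh
# Minimal re-deploy sketch. APP_DIR and SERVICE are placeholders, not a
# real setup. With DRY_RUN=1 (the default) it only prints what it would do.
set -eu

APP_DIR="${APP_DIR:-/srv/myapp}"   # where the app lives (assumed)
SERVICE="${SERVICE:-myapp}"        # service to restart (assumed)
DRY_RUN="${DRY_RUN:-1}"            # set to 0 to actually run the commands

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

deploy() {
    run git -C "$APP_DIR" pull --ff-only
    run pip install -r "$APP_DIR/requirements.txt"
    run systemctl restart "$SERVICE"
}

deploy
```

The nice side effect: the script itself is the documentation of how the deployment works.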
I recently needed to create a new CentOS 7 base box for Vagrant using VirtualBox. Since I ran into some road bumps, here is a short list of them and how I managed to overcome them:
It is recommended to install dkms before installing the VirtualBox Guest Additions to avoid having to recompile after a kernel update. dkms is not in the CentOS repos, but can be found in the rpmforge repositories. However, there is no rpmforge for CentOS 7. I used EPEL instead, which has a repo for CentOS 7 and also contains dkms.
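Inside the guest, that boils down to something like the following; the exact list of build dependencies is my assumption, but gcc, make and the kernel headers are needed to compile the Guest Additions modules:

```shell
$ sudo yum install -y epel-release
$ sudo yum install -y dkms gcc make perl kernel-devel kernel-headers bzip2
```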
I use Linux Mint 17 as the host, which currently has VirtualBox version 4.3.12 in the repo. I created a fresh machine and installed CentOS 7 from the minimal ISO. When I tried to install the VirtualBox Guest Additions (which are required for Vagrant) I got the following error:
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.3.12 Guest Additions for Linux............
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox DKMS kernel modules [ OK ]
Removing existing VirtualBox non-DKMS kernel modules [ OK ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module [FAILED]
(Look at /var/log/vboxadd-install.log to find out what went wrong)
Doing non-kernel setup of the Guest Additions [ OK ]
Installing the Window System drivers
Could not find the X.Org or XFree86 Window System, skipping.
After some research I found this thread on the VirtualBox forums, which recommended using a newer version of the Guest Additions ISO. I downloaded version 4.3.14_RC1 and it installed just fine.
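Roughly, the manual update went like this (run inside the guest; the download URL follows VirtualBox’s usual layout, so double-check it for your version):

```shell
$ VER=4.3.14_RC1
$ curl -LO "http://download.virtualbox.org/virtualbox/${VER}/VBoxGuestAdditions_${VER}.iso"
$ sudo mount -o loop "VBoxGuestAdditions_${VER}.iso" /mnt
$ sudo sh /mnt/VBoxLinuxAdditions.run
$ sudo umount /mnt
```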
When starting the box in Vagrant I got:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p /vagrant
Stdout from the command:
Stderr from the command:
sudo: sorry, you must have a tty to run sudo
It turned out I had to comment out requiretty and !visiblepw in the /etc/sudoers file. (See this issue for details.)
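On a stock CentOS 7 sudoers file, something along these lines does the commenting; the sed patterns assume the default formatting, and visudo -c validates the result afterwards:

```shell
$ sudo sed -i.bak -e 's/^Defaults\s*requiretty/# &/' \
                  -e 's/^Defaults\s*!visiblepw/# &/' /etc/sudoers
$ sudo visudo -c
```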
Data backups are probably one of the most annoying topics in the world of computers. Yet they are a necessity if you want to minimize your risk of data loss. There are tons of commercial products that do the job, and probably more strategies in use than people living on earth.
So I just want to describe my approach and if someone comes across this I am happy to hear about your opinion in the comments!
I generally divide all the files on my computer into three tiers:
Completely reproducible: Everything that can be reproduced with little or no effort, for example all the stuff that comes with a Linux distro, binaries, etc.
This tier obviously needs no backup.
Large non-reproducible files: Music, pictures, movies, etc. Everything that I want to keep but that is too large to just keep in a Dropbox folder or something comparable.
Small non-reproducible files: Mostly documents like invoices, CVs, config files. These files are small and thus easier to handle in terms of backup.
So, with two different tiers of files that need backups, I also have two different strategies that I use:
External Hard Drive Backups
I have a little one-liner shell script that just calls rsync with a couple of parameters and creates a copy of all the files in my home folder. The files in the exclude list are skipped, for example the "Downloads" folder, where I generally have large stuff lying around that I don’t really need. Have a look at the script here.
There are two problems with this approach:
First: It’s not technically a backup, it’s just a copy. A "real" backup has to be incremental, such that I could go back to any snapshot I ever took. With a plain copy, if you ever damage an original file and then run a backup, that file is lost. But I decided that it is good enough for me.
Second: It’s not off-site. It does not protect my data from a fire, or from a very thorough thief who steals both my computer and the external hard drive.
Encrypted Dropbox Backups
Dropbox is a service that I guess many people use for file backup, and it’s great and easy to use. Unfortunately, in the post-Snowden era we have to assume that everything on Dropbox is readable by at least the US government agencies. If you properly encrypt your data before you send it to Dropbox, however, you’re good to go.
That is, if we assume that the employed encryption algorithm is unbroken. If you followed the Snowden revelations you might feel uneasy trusting any kind of encryption, but just as Bruce Schneier says: I trust the mathematics.
So I wrote a little script that collects all those smaller files I want to back up, packs them into a tar-gzip archive with a time stamp in the filename and passes this to gpg, which uses the CAST5 algorithm to encrypt the archive with a user-supplied password.
Finally, the encrypted file is moved to the Dropbox directory and thus automatically uploaded to Dropbox. Have a look at the script here.
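Stripped of the file-collection logic, the core of the scheme looks roughly like this; all paths are placeholders, and gpg will prompt for the password interactively:

```shell
#!/bin/sh
# Sketch of the encrypted-backup scheme: tar+gzip a source directory,
# encrypt it symmetrically with CAST5, and write the result into the
# Dropbox folder so it gets uploaded automatically.
set -eu

encrypted_backup() {
    src="$1"       # directory to back up, e.g. "$HOME/Documents"
    dropbox="$2"   # e.g. "$HOME/Dropbox/backups" (placeholder)
    stamp=$(date +%Y-%m-%d_%H%M%S)
    tar -czf - -C "$src" . \
        | gpg --symmetric --cipher-algo CAST5 \
              --output "$dropbox/backup_$stamp.tar.gz.gpg"
}

# Example: encrypted_backup "$HOME/Documents" "$HOME/Dropbox/backups"
```

Restoring is the reverse: gpg --decrypt on the archive, piped into tar.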
I don’t have a strict rule for when I execute those scripts; I just do it sometimes when I think of it, which obviously is not optimal, and I will try to find a workable way of doing it more regularly.
So that’s it, please let me know what you think about this in the comments or describe your approach.
A journey has ended! I have finally replaced the wifi card in my Lenovo Thinkpad T420. This is my story:
I bought this computer at the beginning of 2012 to replace my old Samsung R560, which was three years old and still kicking, but it had a couple of those annoyances that laptops made for entertainment have: a glossy hull, useless extra media control keys and the inability to be opened with one hand. So I kept that machine as a secondary computer and purchased my T420 because I thought the Thinkpad T-Series are glamour-free workhorses. I also assumed the device would function under Linux without problems. And I was almost not let down: the T420 performs like a beast, is highly portable and has an insane battery lifetime even under Linux. The only problem was that the wifi seemed to have some issues.
I had random connection failures where no data would come through even though the connection was still displayed as active; sometimes it wouldn’t connect at all, and all of that depended on the wifi infrastructure. (It worked almost flawlessly at home, but problems arose at any other place, including my university – which is a big deal, obviously.)
Initially I thought it must be a Linux/driver issue and searched around the web. Eventually I pinned the problem down to my wifi card, a "Realtek 8188CE". The mere mention of that name in a forum will get people to offer condolences. Apparently the issue is not Linux-specific, but it’s worse there. I tried downloading and compiling the latest driver from Realtek, with no success. So I quickly made the decision that the module had to be replaced. Replacing the hardware turned out to be easier than I expected. The real problem was getting the device to accept the new card.
Lenovo ships their laptops with built-in whitelists in the BIOS which check wifi and WWAN cards for a Lenovo branding, a so-called FRU number. If the BIOS finds an unbranded module it will refuse to work.
So I contacted Lenovo and asked where I could purchase such a branded card. They answered with a generic e-mail saying "please find a list of our licensed stores attached", with a list of all online shops known to man as an attachment. So I found a fitting card at Cyberport.de, where I also bought the Thinkpad, contacted them and asked about the branding. They said they were not sure, but assumed it would work. When the card arrived I quickly put it in, only to be greeted with a friendly message claiming to have found an "unauthorized network card". "Computer says nah…" (By the way: after I explained this to Cyberport on the phone, they offered to take the card back. Thumbs up!)
So, no branding on that card. I contacted Lenovo support again, this time via phone. The stressed lady with a foreign accent collected all my personal information before letting me ask any questions. After I told her about my problem, I got an answer that boils down to this: "If your card is not from Lenovo it will not work. This is not a defect, so I cannot help you! Linux??? We do not support Linux. Good bye."
The weirdest thing is that neither she nor her colleagues answering my e-mails could tell me where I could actually buy a Lenovo-branded card. I made it explicitly clear that I did not want this to be treated as a warranty request or anything. I wanted to buy a wifi card… you know… with money. No chance. All I got was the generic list of "authorized dealers". Apparently Lenovo disallows the use of third-party cards without offering cards themselves… so… no cards for you!
Eventually, someone in the Lenovo forums provided me with the FRU numbers of some of the wifi cards Lenovo uses. Searching for those directly yielded some results, and I was eventually able to buy a used "Intel Centrino Advanced-N 6205" from a German online store. The card arrived today and I was finally able to boot with a non-crap card. Linux automatically loaded the drivers and all I had to do was re-enter the WPA2 key for my home network. I have yet to test the card on other networks, but I am really optimistic. The remainder of this article is a short how-to on the issue.
How to replace the card
If you want to replace your card, you need to make sure that you choose a compatible one. The most obvious thing to check is the number of antennas: if the new card has more antennas than the current one, you need to install additional antennas behind the screen, and I guess you don’t want that. So check how many antennas your card has (the Realtek 8188CE has two) and choose a replacement accordingly.
Then try to find out the FRU number of this device. I found my number in the Lenovo forums, but I guess contacting support is also worth a try.
Now search for the FRU number directly. It is my understanding that you can’t buy new Lenovo-branded modules, so you need to find a used one. If you live in Germany or do not fear shipping costs, check out www.nbwn.com; they have a great selection of used notebooks and parts.
Once you have obtained a card, follow the instructions in this video to replace the old card, or my following step-by-step instruction pictures:
Now insert your new card and re-assemble your device.