Effective C++: My understandings of the items from the book – 1

I’ve recently finished reading what is widely regarded as the go-to book on best practices for writing clear, maintainable C++: Scott Meyers’ Effective C++.
I thought, as a way of better committing what I’ve learned to memory, that I’d summarise each topic the book covers. I also thought, why not share my thoughts with the world, in case anyone else benefits from the slightly different perspective I’ll have.

The first point in the book is that C++’s weird, often seemingly inconsistent rules become simpler when you think of it not as a single language, but as a federation of a few related languages:

  • The C part:
    This is the base part of the language: the parts that C++ shares with C. Headers, blocks, statements, and parts of the preprocessor including #defines and #ifdefs, etc. (but not templates). The built-in types are also in the C part – these are the types which in the Java world would be called primitives. This includes all the ints, pointers, bools (even though C didn’t have bools until C99), floats, doubles, etc., but excludes things from libraries and classes. Non-member functions are also C-like.
    Things done in the C style tend to pass by value. (Well, technically everything is always passed by value in C++, but the OO parts of the language can hide that, so it practically becomes pass-by-reference – and I don’t just mean C++ references; see the sketch after this list.)
  • The Object Oriented C++ part:
    This is the standard C++ part, with classes, constructors, methods, exceptions, polymorphism, inheritance, encapsulation, virtual functions / dynamic binding, namespacing, etc.
    The size of the objects here tends to balloon quite a lot, so passing by reference is usually preferred.
    To reconcile this with the claim that everything is pass-by-value in C++: to “pass by reference”, what you actually pass by value is a handle to the object – a pointer or a reference (to const, usually) – which is cheap to copy, while the object itself stays put.
  • Template C++:
    This is the crazy part of C++ where you tell the compiler to write the code for you. There are plenty of gotchas and rules which apply to C++ in general but don’t apply to templates, and vice versa. This is also the part which gives rise to the other paradigms that C++ apparently supports, as templates are Turing-complete in themselves – but such template programs run entirely at compile time (there’s a small taste of this in the sketch after this list).
  • The STL:
    This is the part of the standard library which came from the Standard Template Library. This includes all the useful types you get in the std:: namespace, like strings, vectors, maps, and the algorithms that work on them. This stuff is so magical that there’s a whole other book in this series specifically about using the STL effectively, but simple usage is relatively straightforward, at least in my experience.
    Beware though: although using the STL feels a lot like the OO C++ part of the language, it also tries very hard to mesh with the C part, so you’ll often find that the C-style rules apply to the STL rather than the OO C++ ones (iterators, for example, are modelled on pointers).
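
To make the pass-by-value point concrete, here’s a minimal sketch (my own example, not from the book). Passing a std::string by value copies the whole object, while passing by reference-to-const copies nothing but the reference; and the Factorial template shows the compiler computing a value entirely at compile time:

#include <iostream>
#include <string>

// C style: the whole string object is copied into the parameter.
void printByValue(std::string s) {
    std::cout << s << '\n';
}

// OO C++ style: no copy is made; the function sees the caller's object.
void printByConstRef(const std::string& s) {
    std::cout << s << '\n';
}

// Template C++: the compiler expands this recursion at compile time.
template <unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static const unsigned long value = 1;
};

int main() {
    std::string greeting = "hello, C++";
    printByValue(greeting);     // copies the object
    printByConstRef(greeting);  // passes a cheap reference instead
    std::cout << Factorial<10>::value << '\n';  // 3628800, known before the program runs
    return 0;
}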


Since reading this book I have found that C++ sits much more comfortably in my mind; being able to organise the different behaviours and conventions into these 4 categories has helped to clarify and explain some of the quirks.

Recover ubuntu box from hack, part 12: Setting up RVM again

RVM (in this case) refers to Ruby Version Manager, which is a tool that makes it easy to manage Ruby versions and gem sets. This tool was recommended to me by a friend who knows much more about Ruby and Rails than I do and who has been helping me learn a bit. So I decided to set it up again on my server.

As per the instructions on the website, there are a few prerequisites:

sudo aptitude install git-core curl

Now that I have these things installed, I can run the script from the RVM site:

bash < <( curl http://rvm.beginrescueend.com/releases/rvm-install-head )

Now I have to add a line to my .bashrc file so that RVM is loaded into every new shell session:

[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"  # This loads RVM into a shell session.

Now I need to get the distribution-specific dependencies etc.:

rvm notes

This tells me I need to do the following:

sudo aptitude install build-essential bison openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev

Now I just need to install Ruby 1.8.7, as it’s a prerequisite for 1.9.2 (not sure why any more, but I read it somewhere), then install 1.9.2 and set it as the default:

rvm install 1.8.7
rvm install 1.9.2
rvm use 1.9.2 --default
ruby -v
ruby 1.9.2p136 (2010-12-25 revision 30365) [x86_64-linux]

Woo, it’s all set up and working. Now I need to cd into my old development directory and see if I can start the Rails server!

rails s

Didn’t work. Hmm. Oh I don’t have the rails gem installed. Of course!

gem install rails
rails s

Still no. Hmm. Ah bundler…

bundle install 
rails s

Yay! Hurray! Rails server running and I can resume development.

Recover ubuntu box from hack, part 11: Ensuring sane behaviour when a drive dies

So, since mdadm is capable of it, I would prefer that it automatically email me when my RAID looks like it might fail. I found the command to test if it’s working:

mdadm --monitor -1 -m lynden /dev/md0 -t

The --monitor flag puts it into monitoring mode, the -1 tells it to run just once, the -m specifies the email address (here a user on the local system), then the md array to check is specified, and finally -t tells it to send a test event. A few minutes after I ran that, my CLI told me I have mail. I checked it with:

cat /var/mail/lynden

This gave me the email in raw format. I decided I needed to find a CLI-based email client. Did some searching and found one that sounded good:

sudo aptitude install mutt

Works excellently. Then I set the mail address in /etc/mdadm/mdadm.conf to my username at my server.
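
In a stock mdadm.conf that’s just the MAILADDR directive, so the line looks something like:

MAILADDR lynden

Then I did the test again, this time without specifying a recipient on the command line: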

mdadm --monitor -1 /dev/md0 -t

After playing around for a couple of minutes, bash told me I have mail in /var/mail/lynden. Opened mutt and sure enough it’s there. Everything should be swell now. Now to turn off my computer, pull out a drive, and see how it reacts.

Damn. It didn’t boot. It sat at the boot screen saying the md array could not be started: keep waiting, S to skip, or M for manual recovery. OK, back to do some more searching. I skipped both the mdadm failure and the resultant mount failure, and from the CLI I tried to assemble the array. It came back saying it had assembled the array with 1 drive. After some investigation, it turned out the drive I removed was the middle one of the array, whereas I thought it was the 3rd disk. The partitions the array uses aren’t uniform (sda2, sdb2 and sdc1), so pulling out /dev/sdb resulted in sdc being renamed to /dev/sdb. Since, in the mdadm.conf file, I specified exactly which partitions it should use, it was looking for sdb2 and sdc1, neither of which existed at that point. So I just removed that device-specific line and uncommented the original line “DEVICE partitions” to allow mdadm to examine all partitions itself. Then I tried to assemble the array again, and this time it assembled correctly with 2 of 3 devices. So now it will assemble. And hey, mail in my inbox! Indeed, it was mdadm reporting an actual degraded array.
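
For reference, the change in /etc/mdadm/mdadm.conf amounts to something like this (the commented-out line stands in for my original device-specific one):

# was: pinned to exact partitions, which breaks when device names shift
# DEVICE /dev/sda2 /dev/sdb2 /dev/sdc1
# now: let mdadm examine all partitions itself
DEVICE partitions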

So I rebooted again, but it still failed. It seems mdadm just won’t automatically boot with a degraded array, and after further research this turns out to be precisely the problem: it is intended behaviour, but behaviour which can be changed. There is a line in the file /etc/initramfs-tools/conf.d/mdadm which says “BOOT_DEGRADED=false”, which I just needed to change to “true”. Did this, rebooted, and everything worked perfectly fine. Now it was time to try again with all 3 drives plugged back in. Once again it didn’t go as expected: it wouldn’t boot because an error occurred mounting the share (mount this time, not mdadm). I skipped the mount and mounted it manually via the CLI, which worked without error. Tried rebooting again to see – worked fine. Interesting. I wonder why that is? It must be something about the automatic mount remembering the exact configuration of the device it mounted from last time, so that mounting it manually (still only from fstab, though) allowed it to auto-mount again from then on.
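
For the record, the change itself is one line in /etc/initramfs-tools/conf.d/mdadm:

BOOT_DEGRADED=true

Since that file is read when the initramfs is built, on some setups you may also need to run sudo update-initramfs -u afterwards for it to stick – I mention that as an assumption, since my reboot picked the change up without it.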

Recover ubuntu box from hack, part 10: Setting up transmission BitTorrent client

This went particularly smoothly this time I think. I’m fairly certain there was a lot more confusion last time – especially with where the configuration file is. Anyway, this is how it all went down:

sudo aptitude install transmission-daemon

transmission-daemon is the package which includes the web interface and the CLI client. The web interface is nice but missing some features; the CLI is a bit nicer for automatically adding a large number of torrents. Anyway, I tested it from my desktop and it’s clearly running, because it gives the error about my IP not being in the allowed-IPs list in the settings file.

I remember last time I had a really hard time working out which settings file it used. There seemed to be at least 3. So this time I decided to actually work it out:

lynden@lfs:~$ sudo find / -name settings.json
/etc/transmission-daemon/settings.json
/var/lib/transmission-daemon/info/settings.json

Ok, so there’s two. I decide to work out which one is live by changing the port in the first one and reloading the service. If I get the same error afterwards, then it’s the other one.

sudo vim /etc/transmission-daemon/settings.json
sudo /etc/init.d/transmission-daemon reload

Test it, and the error changes to a straight ‘cannot find web page’, so that’s the correct config file. To be sure I don’t forget, I decide to go and mark the other one by putting a comment at the top noting that it’s not the live one. So I do that. Then I go back to the first one again to continue setting up, but hey! The warning comment is in this one now! I double-check I didn’t open the wrong file; nope. So it turns out the only two config files on the system this time are symlinked together. Well, that makes things easy.
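
A quick way to confirm the symlink, for anyone following along (an afterthought – I worked it out the hard way above), is to list both files and see which one shows an arrow:

ls -l /etc/transmission-daemon/settings.json /var/lib/transmission-daemon/info/settings.json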

Anyway, back to the settings: I add the entire subnet to the allowed list, remove the log-in requirement, and set the download directory to be on the RAID share.
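
The relevant keys in settings.json look something like this – the key names are transmission’s own, but the subnet and download path here are placeholders rather than my exact values:

{
    "download-dir": "/media/share/public",
    "rpc-authentication-required": false,
    "rpc-whitelist": "192.168.1.*",
    "rpc-whitelist-enabled": true
}

With the settings sorted, I just need to mount the old system and get the .torrent files and resume information: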

sudo mount -o loop /media/share/public/image /media/oldmachine
sudo cp /media/oldmachine/var/lib/transmission-daemon/info/torrents/* /var/lib/transmission-daemon/info/torrents/
sudo chown debian-transmission:debian-transmission /var/lib/transmission-daemon/info/torrents/*
sudo cp /media/oldmachine/var/lib/transmission-daemon/info/resume/* /var/lib/transmission-daemon/info/resume/
sudo chown debian-transmission:debian-transmission /var/lib/transmission-daemon/info/resume/*
sudo /etc/init.d/transmission-daemon reload

Done. Check it in my browser again and it’s all working. Awesome. That was quick. One more quick reboot to make sure it comes back by itself – yep. All done. Torrenting set up once more.

Recover ubuntu box from hack, part 9: Reconfigure samba file sharing

Before the hack I had set up a shared folder which is the main reason for having the RAID set up. I want to be able to share any media throughout the house as well as provide a place for people to store any data they might want to back up. To this end I had samba set up. So again I set it up:

sudo aptitude install samba

Now I need to adjust the options:

sudo vim /etc/samba/smb.conf

Uncommented the ‘;  security = user’ line. Set up a couple of shares. Reloaded the configuration:

sudo reload smbd

Tested it. Works. One of my shares forces guest-only access, and one of them requires a log-in. It took a very long time to work out how to log in. It kept saying “… Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. …”. After doing a bit of research I found I could kill all current connections with a command in my Windows command prompt:

net use * /delete

This kills all current network share connections, including any network drives. This was not a good solution, and a bit more searching told me that it’s just how Windows works with network shares. The MS KB article I found suggested a workaround, which I will use: log into \\server-name for one remote user account and \\server-IP for the other. This seemed to stop that error message from showing again, so long as I remember which user goes with which address. This workaround is good enough, since it should pretty much only ever be me who needs it, and only when I’m testing things, or logging into my own private share from another user’s computer that is already logged into their private share. However, I was still not able to come up with a working log-in. A bit more research and stuffing around revealed that the ‘unix password sync’ line in the configuration file doesn’t mean samba checks the user-name and password against the server’s user accounts – it only keeps the two password databases in sync when a samba password is changed. So I had to create a samba user-name and password:

sudo smbpasswd -a <username>

Once that was done I had to work out what domain it wanted. It seems it will only allow me to log on as workgroup\<username>, where the workgroup is the one specified in the smb.conf file. I was sure I didn’t need to specify the workgroup in my last set-up, but I concede I may not have set it up properly last time.

Here is a dump of the useful parts of the smb.conf file – as given by the command ‘sudo testparm /etc/samba/smb.conf’:

[global]
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
[printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        browseable = No
[print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
[public]
        comment = RAID public area
        path = /media/share/public
        guest only = Yes
        guest ok = Yes
[public - rw]
        comment = writable version of public share
        path = /media/share/public
        read only = No
        create mask = 0755
        directory mask = 0777
[guest area]
        comment = writeable area for guests
        path = /media/share/public/guest area
        read only = No
        create mask = 0755
        directory mask = 0777
        guest only = Yes
        guest ok = Yes
[lynden - private]
        comment = Lynden's personal stuff
        path = /media/share/private/lynden
        valid users = lynden
        read only = No
        create mask = 0700
        directory mask = 0700

The reason I have the main share set up with a read-only main area, a read/write version, and a guest read/write area is so that people who are visiting, or who get onto our network without permission, can see what I have but cannot modify anything. House mates who want to use the RAID for storage can put things into the guest area, or if they want I’ll set up a user-name for them. The read/write version of the main area is so that users I know and trust can modify the share if they want. However, this security trades off against the simplicity and ease of use of a straight public read/write network share, which is what my girlfriend would prefer.

I think it works a little better this time, and I definitely have a better understanding too.

Recover ubuntu box from hack, part 8: Reconfigure RAID5 array with mdadm

So this time I’m going to retell my experience of the simple task of reconfiguring the RAID 5 array. I had already done this once from the live CD, when I was making sure all of my things were still there, so it was going to be fairly straightforward. All I had to do was install mdadm and set up the configuration file. So the first step:

sudo aptitude install mdadm

Excellent. Now I find that the configuration already exists in the /etc/mdadm/mdadm.conf file, as mdadm detected the RAID partitions during installation. However the RAID wouldn’t start:

sudo mdadm --assemble /dev/md0
mdadm: /dev/sdc1 has no superblock - assembly aborted

“Hmm”, I thought. “Maybe I’m doing something wrong. OK – I’ll just get the old /etc/mdadm/mdadm.conf file from the old installation. Where did I put that backup?” *sigh* The backup was a disk image stored on the RAID array. Cool. So no choice but to work out how to use mdadm properly.

Then I tried

sudo mdadm --assemble /dev/md0 --verbose /dev/sda2 /dev/sdb2 /dev/sdc1

But mdadm reported that it had started the array with only 2 of the 3 devices. Why the hell is it doing that, I wonder?

Ok. So I did some research along with a mate from work. Turns out there’s actually a bug in Linux that causes the kernel to claim RAID member partitions, preventing mdadm from assembling them. The page describing the bug gives a workaround, which is to attach the offending partition to a loop device and then use that in the array instead:

sudo losetup /dev/loop0 /dev/sdc1
sudo mdadm -A /dev/md0 /dev/sda2 /dev/sdb2 /dev/loop0

This worked fine. But this was hardly a good solution. More research to be done.

Eventually I found that /dev/sdc1 – which was one of the RAID partitions – was being claimed by an array called /dev/md_d127, preventing the partition from being assembled into another array. I confirmed this by checking /proc/mdstat. Simply running

sudo mdadm -S /dev/md_d127

to stop the unknown array allowed me to then run the correct RAID assemble command, so now I had a working RAID again. I rebooted and checked – sure enough, the unknown array was running again, and mine wouldn’t assemble. So I stopped the unknown one again and assembled mine. Then I created a directory and mounted the RAID to it:

sudo mkdir /media/share
sudo mount /dev/md0 /media/share

Then I go and mount the image from the share to another new directory:

sudo mkdir /media/oldmachine
sudo mount -o loop /media/share/public/image /media/oldmachine

I get the mdadm.conf file from that and overwrite my own. Restart the machine and nothing at all happens differently. *le sigh*.

After another couple of hours of playing around, another friend comes online. I ask him if he’s had experience with mdadm before. He has, and he has his own RAID array going currently. Excellent. I explain the situation and he asks me what type the RAID partitions are:

sudo fdisk -l /dev/sda

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000dda4

 Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        6079    48827392   83  Linux
/dev/sda2            6080      243201  1904682465   fd  Linux RAID autodetect

He goes on to tell me that type ‘fd’ partitions will be automatically, and often incorrectly, assembled by the kernel at boot. I need to change them to type ‘83’, which is for Linux file systems like ext2/3/4. That way the OS won’t grab the partitions itself, and mdadm gets the chance to.

So I use fdisk to change the partition types; for each disk I do:

sudo fdisk /dev/sda
... 
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 83
Command (m for help): w

and so on with the other disks. ‘t’ says I want to change the type, I choose partition 2 as it’s /dev/sda2 I need to change, 83 is the type code for ‘Linux’, and ‘w’ writes the changes. It then says something about not being able to re-read the partition table as the disk is in use at the moment. So once I’ve done that to all the drives, I reboot.

I check /proc/mdstat to see if it’s solved the problem. Nope. After 15 minutes of wondering what to do next, I reread the chat with my friend – after changing the partition types I need to reconfigure mdadm:

sudo dpkg-reconfigure mdadm

Then I reboot and it’s running correctly. Then I just edit my fstab file to include the RAID:

echo '/dev/md0    /media/share    ext4    defaults    0    2' | sudo tee -a /etc/fstab
sudo mount -a

Check the mount and it’s working. Rebooted and checked again – still fine. Awesome. A whole day on this stupid mess, which had taken me all of 30 minutes from a live CD, simply because everything just worked that time.


This post just goes to show the trade-off of solving the problem yourself versus asking someone who knows: sure, I learned a fair bit in the several hours I spent researching the issue, but the most relevant things I learned came in the 15 minutes it took to solve once I was talking to someone with experience.


Recover ubuntu box from hack, part 7: set up dynamic dns service updater

So for me to be able to find this server from anywhere on the Internet, I need a way of knowing its IP address. Since our internet connection has a dynamic IP, I need to use a Dynamic DNS service, such as http://www.dyndns.org

Now, most half-decent routers have a way of updating the DDNS service automatically when our IP changes, and ours does. However, when I put in the settings and click save, it claims success, but the settings are gone when I look again, and in previous tests it didn’t actually update anything.

So I need to set up software on the server to update the DDNS service. A package in the repos does just that: ddclient. This is really straightforward to set up:

sudo aptitude install ddclient

then go through and answer the questions as asked, and it will sit there on the computer and periodically make sure the IP is up to date on the DDNS service. Only I somehow managed to miss pressing the space bar to select the domain name I wanted updated, and the only way to remedy this is to skip to the end of the configuration and run

sudo dpkg-reconfigure ddclient

to start the questions over. This time it worked. I can’t really be bothered testing this; it’s not that important anyway. I’ll just wait till it doesn’t work.
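
For reference, those debconf answers get written to /etc/ddclient.conf. Working from the standard format rather than my actual file (the values below are placeholders), it ends up looking something like:

protocol=dyndns2
use=web, web=checkip.dyndns.com, web-skip='IP Address'
server=members.dyndns.org
login=my-dyndns-username
password='my-dyndns-password'
myhostname.dyndns.org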

The downside of this software setup, as opposed to using the router, is that if the server is off and our IP changes, I won’t be able to remotely turn the server on with a magic packet sent to our IP. Hopefully this isn’t a big issue, as I plan to just leave the server on all the time.
