This must be how my elementary school friends who were WWE fans felt.

I don’t get excited for sporting events or other TV shows. I’ll waste time watching good TV but I don’t get excited about it. It’s just something to do while I’m doing something else.

I can’t wait for the UFC 182 bout between Jon Jones and Daniel Cormier. Both of them undefeated, both of them extremely good at what they do. The stats and skills are so even that I can’t even guess who will win. There’s been a lot of bullshit leading up to the fight so I expect it will be good.

This must be how my elementary school friends who were WWE fans felt.

Docs and Spreadsheets

I worked as a middle manager in a large corporation so I’ve been on the sending and receiving end of emailed spreadsheets and Word docs. Why? Because that’s how business was done.

I worked in a training organization where we were required to report on training completion numbers. The official training completion report was created by me (and others) copy/pasting a list of names filtered from a website into a specific tab on a spreadsheet, deleting the extra cells that were not needed, and making sure the VLOOKUPs in the spreadsheet actually did what they were supposed to do. As manual as this sounds, this was the automated version. We used spreadsheets as a business automation tool.

Since we didn’t have the option to just get completion numbers from the website, we had to create our own solution. It probably took 5 people 200 or so hours (not including continued maintenance) to get to that level of automation, when it would have taken a person on the web team an hour or two to just add in a completion percentage. This was the process for reporting numbers that carried penalties in the millions.

A big part of the reason we had to do the completion numbers like this is that we had agents who were on vacation, sick leave, FMLA, absent, or who simply didn’t work for us anymore. It seems to me that since these are employees whose status is already tracked somewhere, it would be trivial to pull that data from our HR solution (PeopleSoft) and compare it with the vendor’s site.
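
A hedged sketch of what that comparison could have looked like, assuming a CSV export of active agents from PeopleSoft and a CSV of completions from the vendor site (the file names and column layout here are invented):

# column 1 of both files is assumed to be the employee ID
cut -d, -f1 active_agents.csv | sort > active.txt
cut -d, -f1 vendor_completions.csv | sort > completed.txt

# active agents with no completion record, i.e. the people to chase down
comm -23 active.txt completed.txt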

I think reasonable people could agree that this wasn’t a good solution for the problem we had.

I’m not quite sure how the idea for Google Docs came about, but I imagine it went something like this: “Executive: I’m tired of having to email people Excel spreadsheets and Word docs because I can never tell who is reading them or who they might have emailed them to.” It was probably way creepier than that.

I’m also not sure what problem they are solving, other than “I no longer have to email these docs or spreadsheets around”, while creating the problem of “I don’t know what anyone has shared with me, when, or how to find it”. I spend about 30 minutes a week trying to find an old spreadsheet or doc that someone has shared with me. To combat that problem I’ve begun making a copy of each doc that someone sends me so that I’ll be able to find it. Another thing that trips me up here is the difference between “Drive” and the Apps; I don’t know what data will be stored where.

So what is the problem that is being solved with Docs? Is it that we need a minimally viable word processor in a browser (that breaks copy/paste about half the time)? Or the ability to share and restrict permissions to the file? Or the ability to allow everyone to view, some to comment, and others to edit?

Spreadsheets seem to exist to keep arbitrary numeric data and Docs seem to exist to keep anything longer than an email. Is that all? Billions of dollars over the last 20 years for something as simple as “arbitrary numeric data” and “more information than will fit in an email”.

I’ve seen a couple of start-ups in the last few days that are trying to fill a perceived gap between what companies actually need and what is provided by Google Apps. Spaces, which was recently bought by one of my favorite group chat applications, Slack (affiliate link), is clearly aimed at the Docs aspect. The second is Airtable, which is obviously going after the spreadsheet aspect.

Let’s talk about OS package management.

This post is quite long, so here are the relevant sections: Introductions, The Rules, Where’s the Code?, and Conclusions.

Introductions

On most modern operating systems there is a package manager that does the following: installs, un-installs, updates, and downgrades packages (software). The names vary by operating system as do the command(s) to invoke the package manager. (For anyone thinking that Mac OS X doesn’t have a package manager: I’m counting the Mac App Store as a package manager since it does the above. It is a smaller subset of packages and they all have to be blessed by Apple but it still performs the functions listed.)

I’ll be writing about YUM and RPM since those are the systems that I know. From conversations I’ve had with people who run Linux versions that use apt, apt-get, or aptitude, most of what I’ve written here applies to those as well.
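
On a Red Hat-style system, the four operations from above look something like this (the package name is just a placeholder):

yum install somepackage      # install
yum remove somepackage       # un-install
yum update somepackage       # update
yum downgrade somepackage    # downgrade (only in newer YUM versions)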

On the most used versions of Linux, Ubuntu (Debian) and Red Hat, the package managers are tied to software already installed on the system; YUM, in particular, is written in the Python programming language. The choice of programming language doesn’t really matter; the fact that it is tied to the version of Python installed on the machine is the problem. This brings me to a set of rules on which a better package manager should be based.

The Rules

1. Package managers should never be dependent on the system version of anything.

Perhaps you, dear reader, haven’t had the opportunity to re-install every RPM on a production machine because they were all deleted somehow. I have had that displeasure and it was not a good time. A package manager that becomes broken when the main version of Python on the system is upgraded from Python 2.4 to Python 2.5 or Python 2.6 is completely worthless. A package manager that no longer works if the system’s Python packages become corrupt is worthless.
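
You can see how tight that coupling is on a stock CentOS/RHEL box; the exact paths and output vary by release, but it looks roughly like this:

head -1 /usr/bin/yum                    # a shebang pointing at the system interpreter, e.g. #!/usr/bin/python
rpm -q --requires yum | grep -i python  # yum's own RPM requires the system python and its modules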

This is not to say that package manager versions shouldn’t be tied to specific OS versions. Having a YUM 5.x for CentOS/RHEL 5.x makes sense. What I mean is that the package manager should be completely self-sufficient and self-contained.

2. Package managers should never be allowed to un-install themselves or their own dependencies.

This goes with rule number 1: the package manager shouldn’t depend on the system version of anything. No package manager should be able to break itself by un-installing its own dependencies, e.g., you can use YUM to un-install glibc, the C library on which EVERYTHING in Linux depends. This would be fine if you were able to use YUM to install glibc again, but you cannot, because YUM depends on glibc to fucking work. The YUM versions in RHEL 5 and up include a reinstall command. If you think it would work in this case, you’d be wrong.
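
For the record, the failure mode looks roughly like this (a sketch only; do not try it on a machine you care about):

# yum will happily compute a transaction that removes glibc and, with it,
# nearly everything on the box, including yum itself
yum remove glibc

# once glibc is gone, neither of these can save you, because yum
# (and the python it runs on) needs glibc just to start
yum install glibc
yum reinstall glibc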

3. Package managers should be able to upgrade you from one version of an operating system to another with minimal downtime and minimal configuration.

Ah, the days of CentOS 6’s release! All the awesome new features with a more sane default filesystem: ext4. (I wrote more sane; it’s still not a production file system.) How great and wonderful, we can upgrade CentOS 5 machines to it, right? NOT A FUCKING CHANCE, LOSER. The official answer about upgrading between them was: re-install the machine. Wow, what a fantastic idea; just re-install the machine and restore all of its data and all of your customers’/users’ data. They won’t mind at all.

The only time I’ll accept the answer ‘re-install’ is when moving from a 32-bit OS to a 64-bit OS since that is a major ABI change. However, I still think there should be a way to upgrade from a 32-bit OS to a 64-bit OS with minimal downtime and minimal hassle.

Where’s the Code?

This is just my idea for a better package management system that would actually fucking work. I’m extremely hateful towards YUM and RPM because I’ve been burned too many times by just how shitty they both are.

There are no pull requests with this post, there’s no code, and there are no suggestions for how to implement any of this, because trying to change the way a Linux vendor does anything is pointless. Unless you want to fork the project and spend the rest of your life maintaining it.

If you think it cannot possibly be that bad to change the way a Linux vendor does things: spend a couple of days following some sysadmins and software developers on Twitter. You’ll see plenty of people who have tried to add their software to a distro, or just update it, and been met with constant bureaucracy.

If you write your own software and want to make it available for people to use: you are far better off building OS packages on your own using tools like FPM, or RPM itself and whatever the Debian-based systems use. You can host your own YUM/APT repo for pretty cheap and there’s zero bureaucracy with which to deal. Another acceptable method for making your software available is putting your code up on GitHub and including a Makefile that will build .deb and .rpm packages.
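
Here’s a rough sketch of the FPM route; the package name, version, and paths are invented, so check the flags against the fpm docs for the version you have:

# build an RPM and a DEB from a plain directory of files
fpm -s dir -t rpm -n myapp -v 1.0.0 --prefix /opt/myapp -C ./build .
fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp -C ./build .

Drop the results into a directory served over HTTP, point something like createrepo (or the Debian equivalent) at it, and you have a repo you control.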

Conclusions

I just want a working package manager that isn’t dependent on system libraries it can un-install on its own, and that can also be used to upgrade me to a newer version of the operating system I’m running.

Are there things out there like this? Yes. I keep a Joyent SmartMachine around for the times when I need a VPS. It’s fairly cheap, runs SmartOS, and has a production filesystem: ZFS. Upgrading SmartOS from one version to a new one works pretty well; you simply follow Joyent’s documentation. The documentation isn’t updated very often, which irritates me, but if you check that there is a newer Quarterly release and follow the instructions, your machine gets upgraded. SmartOS uses pkgsrc, which is available on numerous systems. After logging out and back in, your SmartMachine should be in the state it was in before. In some cases I’ve had to re-install Python packages I was using, but I’m not sure if that is the case for everyone. While SmartOS upgrades seem to work, this isn’t a production machine and I don’t use it for anything that would impact users.

Now that I’ve distilled this caremad into more than a thousand words, I still don’t know why the systems are set up the way they are now. Who thought it would be a good idea for a package manager to be dependent on the installed system software? Who thought it would be a good idea for the package manager to be able to un-install its own dependencies? Who thought that making people re-install an OS in order to upgrade to the newest version was a good idea? Why would anyone think that this is a good idea? I’m happy to listen to any of the reasons behind this. If you’d like to discuss further, @ reply me on Twitter (@klyntonj). If this is just some remnant of times when hard drive space was a precious commodity or when RAM was scarce, I’d love to see it die.

Linux ruined my lunch.

Popped a DVD into the Lenovo (Linux box) so I could watch a movie while I ate, since both the Mac and the additional monitor were already full of stuff. Totem launches asking if I want to play the movie; I click yes. An error pops up saying additional software is required to play this DVD because LINUX.

So I go to the help page to find out what it needs. Try installing the RPMs it says are available, but they aren’t in any of the repos I already have installed because LINUX. Find that people use mplayer instead, so I install the RPMForge repo and then install mplayer, but it’s not added to any of the menus so I can’t just click on it to run it because LINUX.

I go back to the movie player and it finds the files it needed in the RPMForge repo and installs them, but it still can’t play the DVD because LINUX.

Log out and log back in to see if it’s been added; it hasn’t, because LINUX. Go to the mplayer docs and find that I have to run it from the command line like `mplayer dvd://` because LINUX. Since I launched it from the command line it doesn’t accept any mouse input; I can’t double-click, right-click, or make it full screen because LINUX.

My food is now cold. I decide to put the DVD into the Mac and just move the stuff off the second screen. VLC just crashes, so I have to use Apple’s DVD Player app. Decide to write this shit down.

My food is completely cold and my lunch break is over because LINUX.

Here’s why I like Django better than Pyramid or Rails.

As part of my day job I get to see a lot of different web frameworks. I also get to see all of their dependencies, requirements, and craziness when building stand-alone installers for them. Some of the frameworks I see have a ridiculous number of dependencies. Django is so delightfully simple to install and run. Here’s the install_requires from Django’s setup.py:

Z0FL:Django-1.4 klynton$ grep "install_requires" setup.py
Z0FL:Django-1.4 klynton$

None. The only thing you have to have is Python.

Here’s the list of install_requires from Pyramid:

install_requires=[
    'setuptools',
    'Chameleon >= 1.2.3',
    'Mako >= 0.3.6', # strict_undefined
    'WebOb >= 1.2dev', # response.text / py3 compat
    'repoze.lru >= 0.4', # py3 compat
    'zope.interface >= 3.8.0',  # has zope.interface.registry
    'zope.deprecation >= 3.5.0', # py3 compat
    'venusian >= 1.0a3', # ``ignore``
    'translationstring >= 0.4', # py3 compat
    'PasteDeploy >= 1.5.0', # py3 compat
    ]

tests_require = [
    'WebTest >= 1.3.1', # py3 compat
    'virtualenv',
    ]

if not PY3:
    tests_require.extend([
        'Sphinx',
        'docutils',
        'repoze.sphinx.autointerface',
        'zope.component>=3.11.0',
        ])

testing_extras = tests_require + ['nose', 'coverage']

Rails is even worse. Here are all of the dependencies required to install Rails 3.2.1:

i18n-0.6.0.gem
multi_json-1.0.4.gem
activesupport-3.2.1.gem
builder-3.0.0.gem
activemodel-3.2.1.gem
rack-1.4.1.gem
rack-cache-1.1.gem
rack-test-0.6.1.gem
journey-1.0.1.gem
hike-1.2.1.gem
tilt-1.3.3.gem
sprockets-2.1.2.gem
erubis-2.7.0.gem
actionpack-3.2.1.gem
arel-3.0.0.gem
tzinfo-0.3.31.gem
activerecord-3.2.1.gem
activeresource-3.2.1.gem
mime-types-1.17.2.gem
polyglot-0.3.3.gem
treetop-1.4.10.gem
mail-2.4.1.gem
actionmailer-3.2.1.gem
thor-0.14.6.gem
rack-ssl-1.3.2.gem
rdoc-3.9.4.gem
railties-3.2.1.gem
bundler-1.0.22.gem
rails-3.2.1.gem

Oh, and these have to be installed in this order or the gem dependencies will fail, causing the process to exit. This doesn’t include the Ruby version required to run this version of Rails. Why are you installing gems or Python packages manually, you may ask? Here’s why: AVAILABILITY.

I know who owns my availability and it sure isn’t rubygems.org or PyPI (lmao if you trust the uptime of either) or anyone else but me.

Building an installer for Django only requires the Django-${VERSION}.tar.gz file; it doesn’t require any of the nonsense that Pyramid or Rails does.
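
Which means an offline install can be as simple as shipping the tarball along and running something like this (the version here is just the one from the grep above):

tar xzf Django-1.4.tar.gz
cd Django-1.4
python setup.py install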

Compiling and Installing tmux on SmartOS.

Update: The SmartOS repository, starting at 2012Q1, has tmux 1.6, which can be installed by simply running `pkgin in tmux`. Here’s a link to the documentation on how to upgrade to 2012Q1.

Since SmartOS was released I’ve been meaning to take it for a spin and see if I like it. Over the weekend I used the Joyent Cloud to spin up a SmartOS64 instance and play with some configuration. The list of packages is OK but not incredible, and some of them are pretty outdated. The version of pkgin that comes with SmartOS is missing the provides command, which is a real pain in the ass when trying to compile from source.

Honestly, I wouldn’t have ever finished the install without the help of my co-worker Ryan S, who, it seems, is really good at tracking down failing dependencies and all sorts of other Unix problems.

To install tmux on SmartOS you’ll need to compile some things yourself. You’ll need to compile and manually install:

  • zlib
  • libevent
  • tmux

The versions of zlib and libevent that come with the pkg repo Joyent provides DO NOT work for compiling tmux because they’ve been compiled with the wrong ELFCLASS, and you’ll just get this error:

ld: fatal: file /opt/local/lib/libz.so: wrong ELF class: ELFCLASS64

There are some packages that you’ll need to install before you can compile and install these modules:

  • python27
  • openssl
  • gcc-compiler
  • gcc-tools
  • gmake
  • gtar
  • gzip
  • libtool-base
  • ncurses

After those are installed you need to install zlib, libevent, and finally tmux:

Installing zlib:

  • wget http://zlib.net/zlib-1.2.6.tar.gz
  • tar xvf zlib-1.2.6.tar.gz
  • cd zlib-1.2.6
  • CPPFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' ./configure --prefix=/opt/local
  • make && make install

Installing libevent:

  • wget https://github.com/downloads/libevent/libevent/libevent-2.0.17-stable.tar.gz
  • tar xvf libevent-2.0.17-stable.tar.gz
  • cd libevent-2.0.17-stable
  • CPPFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' ./configure --prefix=/opt/local
  • make && make install

Installing tmux:

  • wget http://downloads.sourceforge.net/tmux/tmux-1.6.tar.gz
  • tar xvf tmux-1.6.tar.gz
  • cd tmux-1.6
  • CPPFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib -levent' ./configure --prefix=/opt/local
  • make && make install

You may run into a problem when doing the make step that looks like this:

gcc -DPACKAGE_NAME=\"tmux\" -DPACKAGE_TARNAME=\"tmux\" -DPACKAGE_VERSION=\"1.6\" -DPACKAGE_STRING=\"tmux\ 1.6\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"tmux\" -DVERSION=\"1.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_CURSES_H=1 -DHAVE_DIRENT_H=1 -DHAVE_FCNTL_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_NCURSES_H=1 -DHAVE_STDINT_H=1 -DHAVE_B64_NTOP=1 -DHAVE_LIBXNET=1 -DHAVE_CLOSEFROM=1 -DHAVE_DAEMON=1 -DHAVE_SETENV=1 -DHAVE_STRLCPY=1 -DHAVE_STRLCAT=1 -DHAVE_ASPRINTF=1 -DHAVE_STRCASESTR=1 -DHAVE_STRSEP=1 -DHAVE_DECL_OPTARG=0 -DHAVE_DECL_OPTIND=0 -DHAVE_DECL_OPTRESET=0 -DHAVE_BZERO=1 -DHAVE_DIRFD=1 -DHAVE_SYSCONF=1 -DHAVE___PROGNAME=1 -DHAVE_PROC_PID=1 -I. -I/opt/local/include -D_XOPEN_SOURCE -D_XOPEN_SOURCE_EXTENDED -iquote. -I/usr/local/include -D_XPG4_2 -D__EXTENSIONS__ -D_POSIX_PTHREAD_SEMANTICS -std=c99 -MT arguments.o -MD -MP -MF .deps/arguments.Tpo -c -o arguments.o arguments.c
In file included from /usr/include/sys/types.h:33:0,
from arguments.c:19:
/opt/local/lib/gcc/i386-pc-solaris2.11/4.6.1/include-fixed/sys/feature_tests.h:362:2: error: #error "Compiler or options invalid for pre-UNIX 03 X/Open applications and pre-2001 POSIX applications"
make: *** [arguments.o] Error 1

If you get that error, just edit the Makefile, remove the -std=c99 flag, and run make again.
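
If you’d rather not open an editor, something like this should do it with any POSIX sed:

sed 's/-std=c99//g' Makefile > Makefile.tmp && mv Makefile.tmp Makefile
make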

Now run tmux and you should have a fully functioning tmux installation.

Apparently it isn’t elementary.

I started watching the Sherlock TV series sometime last week. Since there are only 3 episodes per season I went through it quite quickly. It was an OK few hours of television, but it had the same hole as every other Sherlock TV show, book, and movie:

  • Sherlock does INDUCTIVE reasoning NOT deductive reasoning.

So the whole branch of science he “created”

  1. Already exists, it’s called deductive logic.
  2. Is completely wrong. He uses inductive logic.

For some reason, people writing adventures for Mr. Holmes have been unable to understand the difference between induction and deduction since 1887. Apparently there’s another branch of logic that describes what he actually does, called abductive reasoning. From the description, it’s basic induction, some guessing, and finally, after one has acquired these “facts”, actual deductive logic.

I’m probably one of the few people who is irritated by this, aside from philosophy professors who have to re-teach students the definitions. If the creators were smart enough to see the difference, they could have used the correct terms and made the series more original and more correct.

So ist das Leben.

Context Switching Sucks

During my day job as a sysadmin I spend all of my time in a text context. I read and write Python, English, bash, and, on a bad day, some SQL. Switching between text and speech, when someone comes into my office to ask me a question, can completely derail whatever I was doing. Context switches like this are fairly expensive when it comes to working on a hard problem, multiple levels down. Ted Dziuba has a great article about this on his site. I’ve also found that switches between programming languages are painful, e.g., going from writing Python to Java without some sort of break in between.

Luckily, I work at home with non-technical people so I don’t often need to give verbal stack traces. I do, however, go to school during the day. I’m a senior and my degree is in German, so most of my classes are, at least partially, in German, and I’m expected to participate in conversation and reading. The context switch from English to German is pretty rough most days, especially when I’ve spent all morning in a text context.

One very unfortunate day I spent about 5 hours straight writing Python before going to the German class. No one else in the house was awake; the kids were at school, and the girlfriend had been up late so she was asleep when I woke up. I didn’t speak a single word to anyone all day; I spent the entirety of the morning writing code… I don’t think I even chatted in IRC at all; I just programmed all morning. In class we were just doing conversation and reading aloud from the book, nothing else that day. Going from Python to English to German to English to German (repeatedly) gave me a massive headache. By the end of the class my brain felt like it had been put through a meat grinder.

I spent the rest of the workday (2 hours) trying to get back to where I was when I left; dynamic languages seem to require more information be kept in a “stack” in the programmer’s head than static languages do. I’d planned a bunch of stuff for after work that evening: finishing a novel that I’d been reading and some homework for the Java class… It never happened. After work I did nothing but watch TV and take Ibuprofen because my brain was so overworked from the context switching; I could hardly function.

I don’t have a good solution for handling context switches like that yet. I’ve experimented with spending some time just talking to people before having to go to the German class, I’ve tried watching TV in English on my lunch break before going to class, and I’ve tried taking more frequent breaks during the morning (with frequent notes about where I was in the code and what I was doing there). The last experiment seemed to be the most helpful; I didn’t feel quite as bad after the switches, but it isn’t a complete solution, i.e., there is still some headache involved.

Good things from my Java class.

I’ve taken the Intro to Java class three times at SUU. The first two times I took it I had surgery and I wasn’t able to finish the class.

The first time

The first time was absolutely awful. The professor showed up to class the first day, told us to go to his class files and download his customized version of Wordpad that included a few batch scripts and other hackery to set the CLASSPATH correctly and open a terminal to the right directory for compiling.

Our book was about working with multimedia in Java. It wasn’t a “learning Java” book. We were assigned reading every day that didn’t match up with what we were actually doing. We were also assigned reading from the appendices, which actually did match up with what we were doing.

Each class period we would spend 40–45 minutes copying code that the professor was writing on the overhead projector and 5 minutes finding out what our homework would be. Mostly the homework was taking the code we had written in class and modifying it in some way: basic script kiddie shit, not actual programming.

There were no quizzes.

The exams were in this format:

50 minutes:

  • 20 term definitions,

  • 15 multiple choice questions, and

  • 3 – 5 coding problems; no computers allowed.

The coding problems were written on paper, from memory; syntax and compile-time errors counted as missed points. The midterm was 3 programs, each of them more than 25 lines of code.

It was total bullshit.

The second time

The second time wasn’t too bad. There was quite a bit of homework, but I didn’t attend long enough to take a quiz or exam.

Setup

The class was set up so that we would spend the maximum amount of time practicing writing Java rather than learning about syntax and how compilers work. We got a 50 minute presentation on Java’s basic syntax, data types, and an approximation of how the compiler worked. It was just enough that we could use it as a reference when actually writing Java code.

We didn’t start out with Eclipse or Notepad like some classes do. We started with BlueJ; it’s just functional enough that you don’t have to worry about setting up a CLASSPATH for compiling or using the right Java SDK, but it doesn’t give you too many hints about the code.

After we learned the syntax and some data types we went into a workshop format. We would show up to class and spend the time writing the different programs on the worksheet list, each of them designed to teach us one of the concepts we had learned. After a couple of simple exercises the difficulty would increase, usually by using multiple concepts at the same time or combining multiple data types.

Once finished with each of the assignments, we’d have to compile it and pass it off with the professor.

Grading

Each workshop assignment was worth 1–8% of our grade, each quiz was worth 6%, the midterm was worth 20%, and the final project was worth the remaining percentage. Nothing else mattered as long as you could write the code.

Quizzes

Each quiz (there were three total) was a programming problem. We had 45 minutes to code a solution to the problem listed on a piece of paper that was handed out. The problem was usually one of the harder problems assigned during the workshop, with some details changed. They were pass/fail: a full 6% or 0%.

Midterm

The midterm had 5 parts: part one, a multiple-choice question set about Java syntax and data types; part two, an accreditation question set (2 questions); parts three through five, the programming question from each of the three quizzes IF you didn’t pass them the first time. If you passed the quizzes the first time, the midterm was about 6 minutes long.

Final

The final was just a final project: pick something that is interesting or that you want to learn more about, then discuss it with the professor and code it. We had about a month to build the final project, with each class period being a workshop where the professor and tutor were able to help with anything in the project.

This was, by far, the best of the programming classes I’ve taken. It allowed me to not show up to class as long as I understood what was going on and could pass off the assignments, which was a huge incentive for me. It was definitely worth the time I spent on it.

Problems reading and burning DVDs with a MacBook Pro

Over the last couple of months I’ve been having more and more problems with my MBP reading and burning DVDs; honestly, I’ve only tried to burn one DVD in the last two years, so it may have been broken for longer. When I actually needed to burn a DVD, before the session could open, I would get an error that there was something wrong with the media and the disc would be ejected.

Numerous solutions I found online said to:

  1. Take the MBP apart, pull out the SuperDrive, take it apart, and clean the lens.
  2. Find a thin object and a smooth cloth to shove into the drive repeatedly to clean the lens.

I decided I would try option 2. While I was looking for a clean, soft cloth to use, Katrina reminded me that shoving something into the drive is a really dumb thing to do. She suggested just using the canned air I keep in my desk to get rid of dust.

Aha! Finally a sane solution. I used the canned air, two full passes (up and down) while trying to keep the airflow even, popped in a DVD that wouldn’t read 3 hours ago, and it worked. I tried burning the DVD that had failed 20 minutes before and it worked.

So, the real solution to “MBP disk drive not working”, “MBP not reading DVDs”, “MBP SuperDrive not working”, “MBP DVD burn failed”, and any number of other search phrases is actually:

  1. Use canned air to clean the dust off the lens.