I needed a tool to alert me when my cron jobs silently fail. There are existing services for this, but it seemed like a fun thing to build myself. A month of spare-time hacking later, I present to you:

I am using this myself and it has already been useful for me a couple of times. Say, a seemingly benign code change in one service causes my batch job to fail 12 hours later, in the middle of the night. Without any monitoring I might be blissfully unaware for days or months, until I need those backups or whatever, but now I get an email alert and can get it sorted in minutes. Sweet!

I licensed this under the BSD license, hoping it might be useful for other people too. It’s such a simple service it feels wrong to charge big bucks for it. You can grab the code from GitHub, run it, extend it, add unicorns or raptors, and so on. Or you can use the hosted service, which is free. I cannot make guarantees that I’ll keep the hosted service around for ten years, though. The running costs for me currently are: a $5/mo DigitalOcean box, two domain names and SSL certificates, a bit of space on S3 for daily backups, and my own time for maintaining it.

On the implementation side, it’s a pretty straightforward Django app with nothing particularly clever going on, which is a good thing. It does make use of a database trigger (which works with both PostgreSQL and MySQL), and it has some sweet JS horizontal slider widgets for setting duration parameters.

What’s next: I’m not sure. Basically, it is already useful for me as-is. There’s a list of features I’m considering, but I also want to keep the codebase simple, with few dependencies, and easily deployable.

Appcelerator Titanium

A very TL;DR of what I love about Appcelerator Titanium:

  • Appcelerator Titanium is a proprietary platform. You need to sign up, log in, and possibly pay a monthly fee. Sure, they have a GitHub repo, but the GitHub version has no builds and no documentation.
  • the “cross-platform” UI is very limited. Good luck building anything more sophisticated than a button, a picture, a line of text, or a colored rectangle. (To be fair, building UI frameworks is hard. A cross-platform UI is harder, and also a stupid idea.)
  • each application ships with the V8 JavaScript engine. On Android, apps ship 3 copies of V8 for different architectures. The minimum APK size for a “hello world” application is close to 10MiB. A native “hello world” application would be 50KiB.
  • Shitty support for native platform features. Anything beyond “look! we can display a button which runs alert(‘hello’)” is either painful or impossible without patching the Titanium Mobile SDK. You can file a feature request or bug report in Jira, where it will join thousands of others.
  • it’s not doing a good job of hiding platform differences. It’s not even trying that hard. There are platform-specific modules, and for the missing functionality you get to write platform-specific code yourself!
  • No ecosystem. Want to use a 3rd-party service for cloud backends, push messaging, image processing, etc.? 3rd-party services usually have iOS and Android SDKs. Obviously they have no Titanium SDK, but you’re welcome to roll your own Titanium module…

Hello Computer

I’ve been doing little bits of home improvement lately and I think this one is the nerdiest yet, by far.

Observation one: whenever I’m in the apartment, I want the PC on and music going. Turning the PC on is the first thing I do when I arrive, and turning it off is the last thing I do when I leave.

Observation two: right next to the front door there is a light switch that does not appear to be hooked up to anything. And there’s a loose bit of cabling hanging from the ceiling.

Enter home automation, idiot style.

First, I verified that the light switch controls the hanging bit of cable. Obviously it was intended for a light fixture that didn’t get installed. Next, I got an AC socket from a home-depot-type store and wired it up:


Get the idea where this is going? Next up, let me present the “brains” of my home automation solution:


It’s an Android phone with a busted screen that has been sitting around here not doing anything useful. It’s an ideal device if you think about it: it has Wi-Fi, it has an internal battery, it has a screen, and it is relatively easy to develop for. Crucially, it can sense whether it is being charged or not. Much easier to get going with than a Raspberry Pi or Arduino-likes. And of course it is complete overkill for what I’m going to use it for.

So, a phone hanging from the ceiling… It looks bizarre for sure, but the ceiling is high there, and the spot is such that people wouldn’t normally look up and notice it.

The rest is simple: a custom Android app that sends Wake-on-LAN packets and runs shutdown commands over SSH. (GitHub repo here). Now, as I come in, I can turn the light switch on and the PC turns on. Turn it off and the PC shuts down gracefully.
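The Wake-on-LAN side is the only mildly exotic bit: a “magic packet” is just 6 bytes of 0xFF followed by the target’s MAC address repeated 16 times, broadcast over UDP. A minimal sketch (the MAC address and broadcast defaults here are placeholders, not my actual setup):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16  # 102 bytes total

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network (UDP, commonly port 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The target machine’s NIC has to have Wake-on-LAN enabled in its BIOS/driver settings for this to do anything, of course.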

I think next I should extend the app to have voice recognition. It would then wake the PC whenever it hears me saying “HELLO COMPUTER”.

A New Plan?

Remember that one time when I fried a 6850 graphics card? In short, I was switching to an aftermarket cooler, and I had to fiddle with the VRAM heatsinks so they would fit properly on my card. Something got shorted along the way and the card was done.

After that, I bought an Asus 6870 DirectCU. I carefully chose a card with good stock cooling so I wouldn’t need to touch it. Well, it is reasonably quiet, but by now all the other components have gotten quiet enough that the card is again the noisiest component. Hacking time again, eheheh?

Here are my options.

I could sell this card and get a semi-passive Asus STRIX series card. The fans on these don’t even spin up at idle, which would be perfect. It’s a bit of an investment, but, frankly, this is probably the sanest option to take.

I could do something about the stock cooler, stock heatsink, or the cooling profile. On reference-design cards you can use Radeon BIOS Editor to create a custom cooling profile and upload it to the card. The cooling profile could be something like “don’t spin the fans until 65°C, then ramp up, reaching full speed at 75°C”. This would effectively convert a regular card into a semi-passive card. The card would need to have a good heatsink to have thermal headroom, though, so it can run passively for useful amounts of time.
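The kind of curve I mean is trivial to state precisely. As a sketch (the 65°C/75°C thresholds are just the example numbers from above, not values from any actual BIOS):

```python
def fan_duty(temp_c: float, start: float = 65.0, full: float = 75.0) -> float:
    """Semi-passive fan curve: 0% duty below `start`,
    then a linear ramp reaching 100% duty at `full`."""
    if temp_c <= start:
        return 0.0  # fans completely off: the "passive" region
    if temp_c >= full:
        return 100.0
    return (temp_c - start) / (full - start) * 100.0
```

At idle temperatures the card would sit silent in the passive region, and the ramp only kicks in under sustained load.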

The problem with custom designs like my Asus DirectCU is that they use a custom cooling solution and a custom fan controller that you can only partially control from the card’s BIOS. In particular, even at 0% duty cycle the fan will spin at 1300rpm. The fan controller is designed to range between 1300 and 3000rpm, corresponding to 0%–100% duty cycle. What’s more, there is a feedback loop from the tachometer to the PWM signal. The fan controller will maintain a fixed idle speed and will ramp up the duty cycle if the fan slows down. So even if I replaced the stock fan with a slower-running, quieter fan, the controller would still spin it at 1300rpm.

Due to the heatsink design, there is also not much improvement to be had from switching fans. It’s already about as good as it gets with this heatsink.


You cannot switch from the 8.5cm fan to a couple of silent 12cm fans because the top of the heatsink is not flat, and the plastic shroud is there for a good reason. Dual fans would probably do a worse job of cooling, and, yeah, they would still spin at 1300rpm.

An aside: an interesting observation was how the plastic shroud is attached. Its screws are located so inconveniently that you pretty much have to remove the whole heatsink to access them. Why make it difficult? I guess it is to stop people from removing the shroud, messing up the airflow, overheating the card, and then claiming warranty. It’s as if the card’s designers said: “there is no good reason to remove just the plastic. You would only be doing that if you’re also changing the whole heatsink”.

So you could change the whole heatsink, but there’s no way I’m doing that after what happened with the 6850.

How about using the motherboard’s chassis fan connectors for the GPU? That might work, but there are two potential issues. First, the GPU might get upset (“dude, where’s my fan?”) and throttle back its performance. Second, it would be a brittle setup, where some resident software polls the GPU temperature and controls the chassis fan. That’s a GPU meltdown waiting to happen.

Now, as I said, the rational choice is probably to splash out for a STRIX card and be done with it. But for one last idea, how about MitM-ing the GPU fan controller? It appears that Arduino-style devices have the required hardware to read a PWM signal (perhaps with the help of a small resistor and capacitor, to convert it to analog first). They can also output a PWM signal for driving a fan, and surely they can output pulses to imitate the tacho. So there could be an Arduino sitting on the wire before the GPU fan, listening to the trusty but way too conservative PWM signal from the card, then applying its own custom curve and reporting fake tacho numbers back to keep the card happy. Now that would be a hack!
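To make the idea concrete, the remapping logic such a device would run might look like this. (Sketched in Python for readability; a real version would be an Arduino sketch in C. The 1300–3000rpm range is from the fan controller described above; the 40% quiet floor is an arbitrary example.)

```python
# Stock controller maps 0-100% duty cycle to 1300-3000 rpm.
IDLE_RPM, MAX_RPM = 1300.0, 3000.0

def requested_rpm(card_duty: float) -> float:
    """What the card's PWM signal is asking for, in rpm (linear mapping)."""
    return IDLE_RPM + (MAX_RPM - IDLE_RPM) * card_duty / 100.0

def remapped_duty(card_duty: float, quiet_floor: float = 40.0) -> float:
    """Custom curve for the real fan: stay completely off below
    `quiet_floor`% requested duty, then ramp so 100% still means 100%."""
    if card_duty <= quiet_floor:
        return 0.0
    return (card_duty - quiet_floor) / (100.0 - quiet_floor) * 100.0

def fake_tacho_rpm(card_duty: float) -> float:
    """Report back the speed the card expects to see, regardless of what
    the fan is actually doing, so the feedback loop never ramps up."""
    return requested_rpm(card_duty)
```

The fake tacho is the crucial part: without it, the controller’s feedback loop would notice the fan running slower than commanded and push the duty cycle back up.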

GoPro & bike

Here’s the plan for kick-ass sports videos:

  • GoPro 3 Silver, chest mount, a bike
  • Shoot 1080p with ProTune
  • Video editing on Linux is a lost cause; Windows all the way
  • Join .mp4s into a single .mkv using Avidemux, no quality loss here!
  • Stabilize with VirtualDub / Deshaker
  • Bring into Vegas, chop into interesting bits and the rest, then remove the rest
  • Add music track
  • Move interesting bits around and tweak boundaries to semi-match beats
  • Brightness, levels, color correction, sharpness
  • Remove most of the wind noise with EQ
  • Bring the volume up wherever I say anything, so there’s a chance of hearing it
  • Render and youtube

Here’s my first attempt:

I’ll be trying out other mounting options. Even after stabilization there’s plenty of jerkiness. There’s huge DIY hacking potential in how one mounts a GoPro.

I probably overdid colors a bit. And, come spring, there will be more interesting places to ride. Exciting times ahead!

Q: Why GoPro and not, say, Sony Action Cam?
A: Bitrate and flat color profile (ProTune). Sony’s in-camera stabilization is Very Nice, though.

Q: ProTune?
A: Shaky cycling videos need tons of bitrate. Also, the sky is not blown out and can be fixed later.

Q: Deshaker?
A: It’s slow as a cow, but it’s good.

Q: Why stabilize all the footage and not just the fragments I’ll need?
A: It simplifies the workflow: instead of fiddling with each separate clip, I start with a single stabilized file.

How I got a new graphics card

* rawwwwrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr: the fan bearings on the 6850 have worn out
* market research, acquisition of Accelero S1 Rev2
* screwdrivers out, case open, hammer time
* heatsink fit problem, like here; hacksaw and angle grinder save the day
* bad temps; the preapplied thermal compound on the Accelero doesn’t cover all of the GPU
* bad temps; suspect worn-out thermal pads under the voltage regulators, so I make my own
* temps getting there, one last iteration, boom, my card now shorts the motherboard

Time and money well spent…

* More market research, acquisition of Asus 6870 DirectCU
* With -30°C outside, might as well mine bitcoins…
* AMD Catalyst 12.1
* Order is important. On Fedora 16 at least, installing the AMD APP SDK *after* Catalyst overwrites some files and OpenCL apps segfault happily
* Current mining settings that result in ~250 MHash/s and a nice, smooth desktop: -k phatk2 -u http://user:pass@host:port DEVICE=0 WORKSIZE=128

With higher aggression it goes higher, 270 MHash/s, and probably even further with more tuning, but the desktop gets a bit jerky. Total power consumption is around 280W.

Measuring power consumption and doing motion tracking

Out of curiosity I bought a cheap Kill-A-Watt-type device to see how much power each of our appliances draws. Some quick numbers:

  • Phone charger
    • plugged into the wall, but not in use: 0.2W
    • charging: 6.5W
  • Logitech Z2300 speakers (200W RMS, the specs say)
    • turned off: 4W (gotcha!)
    • turned on and silent: 12W
    • playing music at normal listening volume: 20W
    • playing music pretty loud: 50-90W, fluctuates a lot
  • Desktop PC (Phenom II at 4GHz, XFX Radeon 6850 stock speed, SSD and HDD, 450W PSU)
    • Idle: 130W
    • Running poclbm (Bitcoin miner, uses GPU): 220W
    • Counter Strike 1.6: 150W
    • DiRT 3: 220W
    • Prime95 (CPU stress test) + FurMark (GPU stress test): 293W

So I thought it would be cool to measure a dozen more devices and appliances in different operating modes and make a video clip showing the measurements. I’d wander through the rooms in fluid steadicam style and there’d be a number floating on top of each device showing its power consumption.

Like many other things, I’ll probably never get around to creating such a video, but the floating-numbers bit piqued my interest: how would one do that? It seems the magic words are “motion tracking”. Possible with free tools, too: Voodoo does the motion tracking, Blender renders the letters. It took a couple of hours to find my way through these two. Some rough results already in: