I’ve had a Dell XPS 9550 since around February. It’s a fantastic laptop, but there have definitely been teething pains with the Skylake processor and Linux, specifically around power management.
Today I installed Linux kernel 4.8-rc5 and got a nice surprise compared to the 4.7.2 kernel I had been running.
That’s quite a bit better than what I observed with 4.7.2, and far, far better than the 4.5.x kernels I ran when I first got the laptop.
In the last while I’ve spent some time learning about Docker, Kubernetes and Google Container Engine. It’s a confusing world at first, but there are enough tutorials to figure it all out if you spend the time.
While doing this I wanted to create a simple micro-service using Python 3.5’s asyncio features, which seemed like a perfect fit. To have a useful goal, I ported our code that synchronizes the NightShift application with HubSpot. This works fine, but after it had been running for a while I discovered that the initial structure I built hid the tracebacks within tasks until the program exited. Figuring out a high-level pattern that addressed this took a lot longer than I thought it would. To help spare others the pain, and hopefully to create a useful tutorial, I have created a GitHub repo called python-asyncio-kubernetes-template.
This little repo has all the files required to create a simple Python/asyncio micro-service and run it on your own Kubernetes cluster or on Google Container Engine. The micro-service exposes an HTTP endpoint to receive health checks from Kubernetes and it responds properly to shutdown signals. Of course, it also prints exceptions immediately, which was the original pain point.
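The hidden-traceback failure mode is easy to reproduce in a few lines of asyncio. The sketch below uses my own illustrative names, not code from the repo: a done-callback attached to each task retrieves and reports the exception the moment the task finishes, rather than letting asyncio complain about it only at interpreter exit.

```python
import asyncio

failed = []  # failures collected here so the callback's effect is visible

def surface_exception(task):
    """Done-callback: report a task's exception the moment the task
    finishes, instead of at exit when asyncio logs
    'Task exception was never retrieved'."""
    if task.cancelled():
        return
    exc = task.exception()  # retrieving it marks the exception as handled
    if exc is not None:
        failed.append(exc)
        print("task failed:", repr(exc))

async def flaky_worker():
    raise RuntimeError("boom")

async def main():
    task = asyncio.ensure_future(flaky_worker())
    task.add_done_callback(surface_exception)
    # Without the callback this sleep passes silently and the traceback
    # only appears when the program exits.
    await asyncio.sleep(0.01)

asyncio.run(main())
```

Note that asyncio.run() is the modern entry point; on Python 3.5 you would use loop.run_until_complete() instead. Without the callback, the RuntimeError only surfaces as a “never retrieved” warning at shutdown; with it, the failure is reported as soon as the task dies.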
The README for the project contains a simple tutorial that shows how to use this end to end.
Hopefully this saves others time. If you know of a better way to do anything I’ve done in this repo please get in touch or submit a PR. It would be great if this repo grew to become a definitive template for micro-services built with these technologies.
In the last while I’ve had to create several screencasts. After some experimentation I’ve found that the following GStreamer pipeline works well.
gst-launch-1.0 -e webmmux name=mux ! filesink location=test.webm \
pulsesrc ! queue ! audiorate ! audioconvert ! vorbisenc ! queue ! mux.audio_0 \
ximagesrc use-damage=false ! queue ! videorate ! videoconvert ! video/x-raw,framerate=5/1 ! vp8enc threads=2 ! queue ! mux.video_0
A couple of notes:
- use-damage=false is important if you are using a composited desktop (e.g. GNOME 3). It took a discussion with a developer on #gstreamer to figure this out.
- framerate=5/1 reduces the framerate to 5 frames per second, which greatly reduces the amount of video encoding required; my computer couldn’t keep up without it. The low framerate works quite well for a screencast and keeps the files small.
I’ve been using the Fish shell for a while now. Its auto-completion is so much better than Bash’s. You should try it.
I was looking at the Fish docs this morning and stumbled across this little gem.
Every configuration option in a program is a place where the program is too stupid to figure out for itself what the user really wants, and should be considered a failure of both the program and the programmer who implemented it.
From the Fish Design Documentation.
Below is a great Linux desktop security checklist from the Linux Foundation.
Linux 3.13 was just released. As always there are lots of interesting new features but two stand out to me: nftables and cls_bpf.
Nftables is the replacement for iptables. It offers a new syntax, looks easier to use, and has a simpler kernel implementation through the use of a JITed BPF-like language instead of dedicated field-matching code.
Cls_bpf is a new traffic classifier that makes use of BPF to match packets for traffic shaping purposes. This is made possible by the BPF JIT that was added to the kernel some time ago.
Additionally, the BPF JIT can now also be used as a security mechanism to filter which syscalls a given process can use.
The commonality to all of these is a small, simple, fast and flexible component in the kernel with the more complex details located in userspace – I really like this design pattern.
Nftables, the new firewall infrastructure designed to replace iptables in the Linux kernel, has just been merged. If you are a Linux kernel packet geek this is pretty exciting stuff. Unlike iptables, which has kernel code to parse and classify all kinds of different traffic types, nftables relies on a small BPF-like bytecode language. The userspace tools simply generate the bytecode and pass it to the kernel for execution, allowing new protocols to be supported without kernel changes. This will eventually replace a lot of complex code in the kernel and has a conceptual beauty that I really like.
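To give a taste of that new syntax, here is a minimal ruleset of my own devising (an illustrative sketch, not something from the merge itself) that drops inbound traffic by default while allowing established connections and SSH; it would be loaded with nft -f ruleset.nft:

```
table ip filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept  # keep existing flows alive
        tcp dport 22 accept                  # allow inbound SSH
    }
}
```

The userspace nft tool compiles a ruleset like this into bytecode and hands it to the kernel for execution, which is exactly the split between simple kernel engine and smart userspace that makes the design appealing.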
Below are a few links for those interested:
LWN.Net: Nftables a new packet filtering engine (2009)
LWN.Net: The return of nftables (2013)
My article on Packet Queueing in the Linux Kernel appeared in the July 2013 issue of Linux Journal. Now that a month has passed, Linux Journal’s great copyright policy allows me to post the content. You can find the full article at the URL below.
Queueing in the Linux Network Stack
Some time ago I started writing a blog post to help myself better understand where packets can be queued within the Linux kernel. This relates to my long-time interest in optimizing for latency and experimenting with the kernel’s QoS features. By the time I was ready to hit the publish button, the blog post was several thousand words long and I had gotten some nice feedback, so I decided to submit it to Linux Journal instead. If you are a Linux Journal subscriber you can now find the article in the July 2013 issue, which has a focus on Networking.