Imagine a glitch in your code has left an entity in the production environment in an inconsistent state that existing code cannot easily fix. Luckily, a couple of lines of code would resolve the issue, if only there were a quick way to ship them to production, targeted at that specific entity, and to safely throw them away as soon as the issue is fixed.
In this post we’ll explore how to do this with the help of Docker and Kubernetes.
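As a rough sketch of the shape such a throwaway deployment could take (the names, image and label here are all placeholders, not the ones from the post), a Kubernetes Job runs a container to completion exactly once and can then be deleted:

```yaml
# one-off-fix.yaml — a hypothetical throwaway Job; all names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: fix-inconsistent-entity
spec:
  backoffLimit: 0          # don’t retry: this is a one-shot, targeted fix
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: fix
          # a tiny image built just for this fix, containing the couple
          # of lines of code that repair the broken entity
          image: registry.example.com/one-off-fix:latest
```

Applied with `kubectl apply -f one-off-fix.yaml` and removed with `kubectl delete job fix-inconsistent-entity` once the entity is repaired.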
This is another short tale about lazy evaluation, but this time it is about the lack thereof: a moment of brief despair and confusion, and how the underlying simplicity of a purely functional language like Haskell comes to the rescue.
Graphs are a fundamental data structure in computer science because many problems can be modelled with them. Graph traversal, shortest paths between two vertices and minimum spanning trees are all well-known algorithms, and there is plenty of literature available. This applies to imperative languages, but is the same true for functional languages? My first-hand experience is that it is not quite the case, and answering a seemingly simple question like “how should I implement a graph algorithm in a functional programming language?” ends up being unexpectedly challenging.
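To make the question concrete, here is a minimal sketch of one common purely functional approach (not necessarily the one the post settles on): represent the graph as an adjacency map and thread an explicit visited set through the traversal, instead of mutating one in place as an imperative version would.

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

-- A graph as an adjacency map from each vertex to its neighbours.
type Graph a = Map.Map a [a]

-- Depth-first traversal from a start vertex. The visited set is passed
-- along explicitly rather than updated destructively.
dfs :: Ord a => Graph a -> a -> [a]
dfs g start = go [start] Set.empty
  where
    go [] _ = []
    go (v:rest) seen
      | v `Set.member` seen = go rest seen
      | otherwise           = v : go (neighbours ++ rest) (Set.insert v seen)
      where neighbours = Map.findWithDefault [] v g
```

For example, `dfs (Map.fromList [(1,[2,3]),(2,[4]),(3,[]),(4,[])]) 1` visits the vertices depth-first as `[1,2,4,3]`. The explicit state-threading is exactly what makes translating textbook graph algorithms less mechanical than in an imperative setting.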
Providing an application as a Docker executable image is a handy way to distribute it: no need to install toolchains, frameworks and dependencies. One can just pull a Docker image and run it. It’s really that simple. Docker images can grow wildly in size, though, because they need to install all the dependencies required to run the application, and as a user this can be quite annoying. Imagine you want to use a tiny application that solves a very specific problem and you have to download a 2GB Docker image! It’s undesirable, and it’s actually not needed: why not ship only the executable in a very compact Docker image? How can this be achieved if the application is built in Haskell?
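One way to get there (a sketch only; the image tags, paths and the `myapp` name are placeholders, not necessarily what the post uses) is a multi-stage Docker build: compile in a full GHC image, then copy just the binary into a slim runtime image.

```dockerfile
# Build stage: full GHC toolchain, only used to produce the binary.
FROM haskell:9.4 AS build
WORKDIR /app
COPY . .
RUN stack build --copy-bins --local-bin-path /app/bin

# Run stage: ship only the executable on a minimal base image.
FROM debian:bookworm-slim
COPY --from=build /app/bin/myapp /usr/local/bin/myapp
ENTRYPOINT ["myapp"]
```

With a fully statically linked binary the final stage could even be `FROM scratch`, but GHC executables typically link dynamically against glibc and gmp, so a slim Debian base is the safer default.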
Docker containers can communicate with each other either using the deprecated links machinery or using user-defined networks. The latter is also the way to go when using docker-compose, since a user-defined network is created by default (at least in recent versions).
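A minimal compose file illustrates the point (service and image names are just examples): both services join the default user-defined network, so each can reach the other by its service name via Docker’s embedded DNS, with no `links:` section at all.

```yaml
# docker-compose.yml — `web` can reach the database at hostname `db`
# because both services share the default user-defined network.
version: "3.8"
services:
  web:
    image: nginx
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
```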
One of the last things left to figure out when I was about to launch this website was a workflow to deploy it nicely. I was using Jekyll + Github Pages for my old website and it worked well enough for me, so I didn’t want to radically change the way I was doing things. On the other hand, I hadn’t updated my old website in a while and I am new to Hakyll, so I had to figure out whether I could keep a similar workflow. I ended up spending a few hours on a solution I was happy with, and what follows is a description of my present workflow and how I got to it.
Lazy IO is so tricky to get right, and has enough intrinsic limitations, that the usual recommendation is simply to avoid it. On the other hand, sometimes it’s not desirable (or even possible) to use strict IO, mostly for memory efficiency reasons. This is the kind of problem that streaming libraries like conduit or pipes are designed to solve. In this post I want to show how I refactored a piece of code that uses lazy IO to use the conduit library (for those not familiar with it, please read this conduit tutorial first).
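To give a feel for the shape of such a refactoring (this is a generic sketch with placeholder file names, not the code from the post), compare a lazy-IO file transformation with its conduit equivalent: the streaming version processes the file chunk by chunk in constant memory and closes its handles deterministically.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Conduit
import qualified Data.Text as T

-- Lazy-IO starting point: when the file is actually read depends on
-- when the lazy string is forced, and the handle lifetime is implicit.
--   main = Prelude.readFile "in.txt" >>= Prelude.writeFile "out.txt" . map toUpper
--
-- Streaming version: each chunk is read, decoded, transformed,
-- re-encoded and written, all in constant memory.
copyUpper :: FilePath -> FilePath -> IO ()
copyUpper inPath outPath =
  runConduitRes $
       sourceFile inPath
    .| decodeUtf8C
    .| mapC T.toUpper
    .| encodeUtf8C
    .| sinkFile outPath
```

`runConduitRes` wraps the pipeline in `ResourceT`, which is what guarantees the handles are released promptly even on exceptions, one of the things lazy IO cannot promise.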
Lazy evaluation sometimes makes it trickier to really understand how a piece of code behaves, at least for folks used to languages with strict semantics (as I am). Sometimes introducing strictness is necessary to avoid space leaks and to make memory allocations more predictable in certain parts of our code. The usual suggestion is to “carefully sprinkle strict evaluation” over our code. One of the classic examples is using foldl to sum a list of ints: instead of computing the result in constant space, it ends up taking an outrageous amount of memory before returning, because thunks pile up (this behaviour is known as a space leak). I personally find it tricky to add strictness to a piece of Haskell code most of the time, so I’d like to share my latest experience doing that.
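The classic foldl example can be sketched like this; foldl' from Data.List is the standard strict alternative.

```haskell
import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks ((0 + 1) + 2) + ... that
-- is only collapsed at the very end, so memory use grows with the
-- length of the list.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at every step, so the sum runs in
-- constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```

Both functions return the same value; the difference only shows up in how much memory is allocated along the way, which is exactly what makes space leaks easy to miss.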
We’ll use the Bloom filter implemented in chapter 26 of Real World Haskell as an example. The version in the book creates the filter lazily; our goal will be to create a strict version of that particular piece of code.
Recently I invested a decent amount of time in making our functional tests less clunky, especially when async computations are involved. We started using Espresso a few days after it was released and never looked back. In this blog post I’d like to focus on how you can tell Espresso to wait for an async computation to finish before performing any actions on a View, along with a few gotchas I learned.
In the last few weeks I’ve been playing around with WebViews a lot and found a few interesting differences (not necessarily documented) between the legacy implementation up to Jelly Bean and the brand new Chromium-based one in KitKat. If you don’t know what I’m talking about - after all, the API has mostly remained untouched apart from some nice additions, more about this in a bit - a couple of useful links by the Google folks are this and this.
It’s quite common to invite users to rate an app on Google Play at some point: on one hand it’s good to know that your users are happy, and on the other it’s a good way to attract new ones. It’s definitely not the only variable in the equation, but I can say that user satisfaction is inversely proportional to the number of crashes. Unfortunately, bugs are something we have to expect as developers, even after testing our apps thoroughly. One thing we probably don’t want to do is ask users to rate our app just after a crash, since we can be reasonably sure they’re not going to be too happy about it. How can we make sure this doesn’t happen?
Some months ago we released the first stable version of ignition. In that post I explained how the ignition-location library works and how to include it in an existing application. This time I’ll explain how we integrated it into the Qype Android application.
Today, as part of our Fun Friday, we released version 0.1 of ignition, an Android library that should make your life as an Android developer much less painful. What I’d like to write about here is the module I focused on: the ignition-location module. I personally started working with Android almost 3 years ago; in Android terms that means Android v1.5 - Cupcake - API level 3. It wasn’t easy to understand the framework back then: lots of documentation was missing and I spent hours digging through the source code to understand how things were supposed to work. A lot has changed since 1.5, and developing Android applications has become much easier, with better documented APIs and better tools.