
Posts

Amazon Maintenance and your RDS Instances

I recently had the pleasure of Amazon telling me that they had to reboot all of my Postgres RDS instances to apply some security patches.

When using RDS you generally expect that Amazon is going to do something like this, and I was at least happy that they told me about it and gave me the option to trigger it during a specific maintenance window or on my own schedule (up to a drop-dead date where they'd just do it for me).

One thing you can't really know ahead of time is what the impact of the operation is going to be. You know there will be downtime, but for how long?

My production instances are, of course, Multi-AZ, but all of my non-production instances are not.

Fortunately, both my non-production instances and my production instances needed to get rebooted, so I could do some up-front testing on the timing.

What I found was that the process takes about 10 to 15 minutes and, in this particular case, it was not impacted by database size, although it is impacted by the number of instances you…
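
If you'd rather not wait for your maintenance window, the AWS CLI can show you what's pending and let you opt in right away. A minimal sketch, assuming a configured `aws` CLI; the ARN here is a placeholder:

```bash
# List any maintenance actions Amazon has queued up for your instances
aws rds describe-pending-maintenance-actions

# Opt in immediately instead of waiting for the maintenance window.
# The --apply-action value should match whatever the describe call
# reports for your instance (e.g. system-update).
aws rds apply-pending-maintenance-action \
  --resource-identifier arn:aws:rds:us-east-1:123456789012:db:my-test-instance \
  --apply-action system-update \
  --opt-in-type immediate
```

Running this against a throwaway non-production instance first is an easy way to get timing numbers like the ones above.
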
Recent posts

Introducing MrTuner and RDSTune

Two simple gems to help tune your Postgres databases.

I'm a big fan of PgTune. I think that in many cases you can run PgTune and set-it-and-forget-it for your Postgres parameters. I like it so much that I often wish I had access to it in my code - especially when working with Puppet to provision new database servers.

When I started looking into RDS Postgres a while back I realized that the default configuration for those instances was lacking and I really wished I could run PgTune on the RDS instances.

It was to solve those problems that these two projects were born.

RDSTune will create a MrTuner-ized RDS Parameter Group. MrTuner is a Ruby gem that follows in the spirit of PgTune, if not directly in its footsteps.
Both will run from the command line but, more importantly, they can be `required` by your Ruby projects to allow you to access these values programmatically.
Both gems are available on RubyGems, and source, examples, configuration, and docs are available at their respec…
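
I won't reproduce the gems' own interfaces here, but the plumbing RDSTune takes care of - creating a parameter group and pushing tuned values into it - looks roughly like this with the plain AWS CLI (group name and values are just examples, not the gem's output):

```bash
# Create an empty parameter group for the Postgres 9.3 family
aws rds create-db-parameter-group \
  --db-parameter-group-name my-tuned-pg93 \
  --db-parameter-group-family postgres9.3 \
  --description "PgTune-style settings"

# Push a couple of tuned values into it.
# Integer values use the parameter's native units: kB for work_mem,
# 8kB pages for shared_buffers (15360 pages = 120MB).
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-tuned-pg93 \
  --parameters "ParameterName=work_mem,ParameterValue=2304,ApplyMethod=immediate" \
               "ParameterName=shared_buffers,ParameterValue=15360,ApplyMethod=pending-reboot"
```

The gems wrap this up so you can compute and apply those values from Ruby (or from something like Puppet) instead of hand-editing a parameter group in the console.
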

Dockerfile Golf (or optimizing the Docker build process)

I was working with a friend of mine on his startup Down For Whatever and he wanted to use Docker.

I created a Docker farm for him and we're using Centurion for deployments; we signed up for a Docker Hub account to store images and started pushing code.
A few days later he emailed me saying that he wanted to switch to Capistrano for code deployments instead of building a Docker image each time, because pushing the image to Docker Hub took too damn long (upwards of 10 minutes for him at its worst).
That felt wrong, and kind of dirty. To me, Docker is about creating a bundle of code that you can deploy anywhere, not about creating a bundle of infrastructure that you can then deploy into.
It was also surprising because I had started off with a Dockerfile based on Brian Morearty's blog post about skipping the bundle install each time you build a Docker image, so I didn't think I had a lot of optimization left available to me.

But once we got into Golfing around with th…
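
For context, the trick from Brian Morearty's post is really just careful layer ordering: copy the Gemfile in and run `bundle install` before copying the rest of the app, so the gem layer stays cached between builds. A minimal sketch (the base image and app layout are assumptions, not the actual Dockerfile from this project):

```dockerfile
FROM ruby:2.1
WORKDIR /app

# Copy only the Gemfile and lockfile first; this layer (and the
# bundle install below it) is only rebuilt when the gem list changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Application code changes every build, so copy it last to keep the
# expensive layers above cached.
COPY . .

CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0"]
```
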

PostgreSQL Performance on Docker

While preparing my presentation on Postgres and Docker to the PDX Postgres User Group for the June meeting, I thought it would be cool to run Postgres on Docker through some rough benchmarks. What I ended up finding was fairly surprising.

The Setup
For this test I used Digital Ocean's smallest droplet: 512MB RAM / 1 CPU / 20GB SSD disk. They have an awesome feature where you can choose the type of application you want to run on the droplet and they'll pre-configure it for you. They even have one for Docker, so that's the one I chose.
Here's a list of all of the versions of the software I used.

Host OS: Ubuntu 14.04 LTS
Kernel: 3.11.0-12-generic
Docker: version 1.0.1, build 990021a
PostgreSQL: 9.3.4

I used pgtune to generate my configuration, and I used these same values across all tests:

maintenance_work_mem = 30MB
checkpoint_completion_target = 0.7
effective_cache_size = 352MB
work_mem = 2304kB
wal_buffers = 4MB
checkpoint_segments = 8
shared_buffers = 120MB
max_connection…
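
For anyone who wants to run a similar test, here's a rough sketch of standing up a Postgres container and throwing pgbench at it. The image tag, container name, scale factor, and client counts here are illustrative, not necessarily the exact setup behind these numbers:

```bash
# Run the official Postgres image (9.3 to match the version above)
docker run -d --name pg-bench -p 5432:5432 \
  -e POSTGRES_PASSWORD=postgres postgres:9.3

# Initialize pgbench tables at scale factor 10, then run a 60-second test
PGPASSWORD=postgres pgbench -h 127.0.0.1 -U postgres -i -s 10 postgres
PGPASSWORD=postgres pgbench -h 127.0.0.1 -U postgres -c 4 -j 2 -T 60 postgres
```
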

Tune your Postgres RDS instance via parameter groups

It seems like the folks at Amazon set some strange defaults for their RDS Postgres instances, and they make it pretty darn difficult to allow for dynamically sized instances.

You tune your Postgres RDS instance via Parameter Groups. The parameter group configuration contains all of the normal PG tuning parameters from your postgresql.conf.

They provide you with a variable, {DBInstanceClassMemory}, which returns the memory in bytes available to the instance, and you can use it in some limited ways to dynamically set parameters based on the instance type you chose for your RDS database. There may be more of these variables, but I wasn't able to find them.

One commenter pointed out that DBInstanceClassMemory is possibly not the entire memory of the machine. So, for example, DBInstanceClassMemory on an m1.xlarge would not be 16106127360 (16GB); instead they lower it to take into account the memory allocated to the OS. I hope that this will be changed in the future since most postgre…
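
As a concrete example, here's roughly how you'd use that variable to size shared_buffers at about a quarter of instance memory (the group name is a placeholder; dividing bytes by 32768 yields a count of 8kB pages equal to 25% of RAM):

```bash
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-pg-params \
  --parameters "ParameterName=shared_buffers,ParameterValue={DBInstanceClassMemory/32768},ApplyMethod=pending-reboot"
```

shared_buffers is a static parameter, so the change only takes effect after the next reboot.
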