Continuous delivery implementations differ due to company testing culture
I just finished reading a post about continuous delivery at Outbrain. It’s quite interesting: they have made a journey similar to what we have done at RemoteX. However, their setup is somewhat different from ours. They use a tool called Glu to deploy to their different target environments and deliver RPMs for their services.
At RemoteX we produce .exe files as well as a set of web services, so I guess there will be significant differences when looking at the actual deployment.
What I find interesting is that they seem to go directly from the builds in TeamCity to starting deployment, whereas we at RemoteX have several steps after the commit stage to package and verify our release before it’s pushed out.
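To make the staged approach concrete, here is a minimal sketch of a gated pipeline. The stage names and functions are invented for illustration; they are not RemoteX’s actual pipeline, and real stages would package the .exe files and web services and run acceptance tests against them.

```python
# Hypothetical sketch of a staged delivery pipeline: each stage gates the
# next, so a release candidate is only pushed out after packaging and
# verification have succeeded.
def run_pipeline(build, stages):
    for name, stage in stages:
        if not stage(build):
            print(f"Stage '{name}' failed; stopping pipeline for {build}")
            return False
    print(f"{build} passed all stages and is ready for release")
    return True

# Placeholder stages standing in for real work.
stages = [
    ("commit", lambda b: True),   # compile + unit tests
    ("package", lambda b: True),  # build installers and service packages
    ("verify", lambda b: True),   # automated acceptance tests
]

run_pipeline("build-1234", stages)
```

The point of the gating is simply that a failure at any stage stops the release from moving further toward production.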
Outbrain also seems to be able to target specific environments to deploy to, to a greater extent than we do at RemoteX. At RemoteX we instead categorize our system installations and gradually deploy to all our installations.
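A categorized, gradual rollout could be sketched like this. The category names and installation identifiers are hypothetical, chosen only to illustrate the idea of widening a release category by category rather than targeting individual environments.

```python
# Hypothetical sketch: installations are grouped into categories and a
# release is rolled out one category at a time.
ROLLOUT_ORDER = ["internal", "early-adopters", "general"]

installations = {
    "internal": ["demo-01"],
    "early-adopters": ["customer-a", "customer-b"],
    "general": ["customer-c", "customer-d", "customer-e"],
}

def rollout(release, up_to_category):
    """Return the installations that receive the release when the rollout
    has been widened up to and including the given category."""
    deployed = []
    for category in ROLLOUT_ORDER:
        deployed.extend(installations[category])
        if category == up_to_category:
            break
    return deployed

# Start internally, then widen the rollout to early adopters.
print(rollout("1.4.0", "early-adopters"))
```

The ordering gives a natural brake: problems found in an early category stop the release before it reaches everyone.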
This brings up something that seems to be common when reading Continuous Delivery war stories. They all agree (mostly) on what Continuous Delivery is all about, but they all implement it differently and have different areas where the solution is stronger or weaker.
The pattern seems to be that once the product goes out to customers, the different cultures at the companies require different solutions. Deploying to specific environments versus deploying to categories of customers is one such example.
These cultural differences all seem to center on faith in the quality of the product, which in turn boils down to faith and investment in automated testing versus manual testing.
In theory, it would be possible to see what a company’s culture is like regarding faith in its own deliverables just by looking at its continuous delivery solution.