DevOps – Superset Blog
https://blog.supersetinc.com

On microservices and distributed architectures
https://blog.supersetinc.com/2018/09/08/microservices-distributed-architectures/
Sat, 08 Sep 2018 14:36:53 +0000
Like the boiling frog, we often fail to appreciate just how significantly the infrastructure upon which we rely as developers has improved over the last decade. When I began working with the Rails framework in 2010, everything from the hardware we used for local development to the infrastructure upon which we tested and deployed was positively anemic by today's standards.

My personal laptop, reasonably high-end for the time, had a 5400 RPM spinning disk and 2 GB of RAM. SSDs were exotic, even on servers. Nowadays, you can get bare metal servers with 512 GB-1 TB of RAM, two multi-core CPUs, and terabytes of fast SSD storage for a price that is perfectly reasonable for even small companies. Similarly, you can easily and cheaply launch fleets of high-spec virtual servers with providers like Amazon Web Services and DigitalOcean within minutes.

In many ways, it seems to me that we are often basing architectural decisions on imagined constraints. In my experience, a decision to embrace a microservices architecture should not follow primarily from concerns about scalability.

Typically, the burden and overhead of managing several services across several environments (development, testing, QA, production, etc.) is a huge multiple of that of managing a more monolithic codebase. Furthermore, scaling a monolithic application, within most practical bounds, is often simpler and cheaper than scaling a more distributed app.

From a technical perspective (speaking here of web apps), a monolithic application can scale very naturally. The application tier can scale horizontally to an almost infinite degree by adding more application servers. Particularly high-traffic pages with largely static content can easily be placed behind a reverse proxy cache like Varnish (or a commercially-hosted sibling like Fastly), and high-traffic pages with more dynamic content can still have their performance dramatically improved with strategies like fragment caching (using a memory store like Redis or Memcached).

The persistence tier need not be a bottleneck either. Relational databases can scale to immense capacity, whether in hosted/managed form (such as Amazon RDS) or on your own hardware, and master-slave replication schemes allow database reads to scale horizontally much as the application tier does. Only extremely write-heavy apps present significant challenges here, and even those scenarios now have a multitude of purpose-built solutions such as Cassandra and Citus (nor is write scaling a problem that a microservices architecture makes any easier).
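To make the reverse-proxy strategy concrete, here is a minimal sketch (my illustration, not from the original post) of micro-caching a high-traffic page with Nginx's built-in proxy cache; Varnish or Fastly fill the same role with much richer invalidation features. The upstream addresses, domain, and TTL are assumptions:

# In the http context of your Nginx configuration:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m
                 max_size=1g inactive=10m;

upstream app_servers {
  server 10.0.0.10:3000;  # hypothetical application servers
  server 10.0.0.11:3000;
}

server {
  listen 80;
  server_name example.com;

  location / {
    proxy_cache pagecache;
    proxy_cache_valid 200 301 10s;  # even a 10s TTL absorbs most of a traffic burst
    proxy_pass http://app_servers;
  }
}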

So when should you adopt a microservices architecture? To me there are two especially compelling scenarios. The first is what I would call the "service bridge" scenario: you have a niche feature that has a significantly different traffic profile from your larger app and, more importantly, would introduce extremely awkward dependencies into your application tier.

A good example might be IP geolocation, which can require data sets of hundreds of megabytes or more (assuming something like MaxMind's binary data files) that you may not want to shoehorn into your primary application (so as not to bloat your application servers). Such a niche dependency might be better implemented as a microservice (though I would argue you would probably be better off delegating to a hosted provider with an API).
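To make the idea concrete, a hypothetical minimal version of such a service might look like the following Python/Flask sketch, using MaxMind's official geoip2 reader (the database path, port, and route are illustrative assumptions):

from flask import Flask, jsonify
import geoip2.database

app = Flask(__name__)
# The large binary database lives only on this service's host.
reader = geoip2.database.Reader("/srv/geoip/GeoLite2-City.mmdb")

@app.route("/locate/<ip>")
def locate(ip):
    record = reader.city(ip)
    return jsonify(
        country=record.country.iso_code,
        city=record.city.name,
        latitude=record.location.latitude,
        longitude=record.location.longitude,
    )

if __name__ == "__main__":
    app.run(port=8081)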

Microservices architectures are also well-suited to circumstances where you have a very large organization with many domain-focused teams that would benefit from a very high degree of autonomy. One of the organizations most visibly advocating for and implementing service-oriented architectures early on was Amazon (as wonderfully documented by Steve Yegge in his famous Google Platforms Rant [archive link]). It's arguable that this vision of service-oriented architecture (SOA) is more along the lines of multiple large, monolithic applications with distinct teams and some shared data, rather than the common understanding of microservices (which is more akin to a single application composed of several small services).

When adopting microservices, be mindful of the unique challenges of the architecture, and have a plan to address them. These should not be incidental concerns but a primary focus from the outset if your team is to thrive. Things such as bootstrapping the development environment and having cohesive QA and versioning practices can be challenging with a microservices architecture. So too can logging and tracing. Many (especially in the context of smaller organizations) take an ad-hoc approach to these issues because they can still manage to make the system function, but oversights of this nature can become serious liabilities at scale.

The critical thing that I hope to convey is that microservices should not be adopted as a default solution to the problem of scaling an application. They can be a great fit for scaling teams and organizations, as well as for wrapping up functionality that is particularly impractical to fit within your primary application's deployment. The matter of scaling an application can be addressed extremely effectively with a monolithic codebase and traditional horizontal scale-out methods.

High-performance logging from Nginx to Postgres with Rsyslog
https://blog.supersetinc.com/2018/04/09/high-performance-logging-nginx-postgres-using-rsyslog/
Mon, 09 Apr 2018 20:37:28 +0000
While there are excellent purpose-built solutions for general log storage and search, such as Librato and the ELK Stack, there are sometimes reasons to write log data directly to Postgres. A good example (and one we have recent experience of at Superset) is storing access log data to present to users for analytics purposes. If the number of records is not explosive, you can benefit greatly from keeping such data closer to your primary application schema in PostgreSQL (even scaling it out to a separate physical node if need be by using something like Foreign Data Wrappers or the multi-database faculties of your application framework).
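As a sketch of the Foreign Data Wrapper approach (the host names, credentials, and table layout here are hypothetical), you can make a table that physically lives on a separate analytics node queryable from the primary application database:

-- On the primary application database:
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER analytics_node
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'analytics.internal', dbname 'analytics');

CREATE USER MAPPING FOR app_user
  SERVER analytics_node
  OPTIONS (user 'analytics_user', password 'secret');

-- Queries against this table execute transparently on the remote node.
CREATE FOREIGN TABLE access_log (
  log_line   text,
  created_at timestamptz
) SERVER analytics_node OPTIONS (table_name 'access_log');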

For our own application, we have a few geographically-distributed Nginx hosts functioning as "dumb" edge servers in a CDN. These Nginx hosts proxy back to object storage to serve up large media files to users. For each file access we want to record analytics information such as IP, user agent, and a client identifier (via the Nginx userid module, to differentiate unique users to the extent that this is possible).

For the purposes of this article we will go through the manual steps of configuration (on Ubuntu Server), though you should absolutely use a configuration management tool such as Ansible to automate these steps in production, as we have.

The dependencies for this logging configuration are nginx-extras (which includes the userid module for analytics) and rsyslog-pgsql (the package for the Rsyslog Postgres output module, which is not part of the default Rsyslog install). You can install these with apt (either manually or via Ansible’s apt module):

sudo apt-get install nginx-extras rsyslog-pgsql
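The equivalent Ansible task (a sketch using the stock apt module) would be:

- name: Install nginx-extras and the Rsyslog Postgres output module
  apt:
    name:
      - nginx-extras
      - rsyslog-pgsql
    state: present
    update_cache: yes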

Ubuntu should have a top-level Rsyslog configuration file at /etc/rsyslog.conf, ending with the line:

# ...
$IncludeConfig /etc/rsyslog.d/*.conf

This instructs the Rsyslog daemon to pull in any configuration files contained in the directory /etc/rsyslog.d when it loads. We will use this shortly to set up a dedicated configuration that reads and forwards our formatted Nginx logs. First, let's configure Nginx to log a JSON payload to a unique log file.

Ubuntu's standard Nginx configuration loads per-site config files from /etc/nginx/sites-enabled/ (the convention being to keep configurations in /etc/nginx/sites-available and symlink the sites you want live into sites-enabled). For this example, we'll assume a configuration for yoursite.com in /etc/nginx/sites-available/yoursite.com.conf:

# in /etc/nginx/sites-available/yoursite.com.conf:

# escape=json (available in Nginx >= 1.11.8) JSON-escapes variable values:
log_format json_combined escape=json '{"time_local": "$time_local", '
   '"path": "$request_uri", '   
   '"ip": "$remote_addr", '
   '"time": "$time_iso8601", '
   '"user_agent": "$http_user_agent", '
   '"user_id_got": "$uid_got", '
   '"user_id_set": "$uid_set", '
   '"remote_user": "$remote_user", '
   '"request": "$request", '
   '"status": "$status", '
   '"body_bytes_sent": "$body_bytes_sent", '
   '"request_time": "$request_time", '
   '"http_referrer": "$http_referer" }';

server {    
  listen 80;
  # + SSL configuration...

  # Optional: Nginx userid module, useful for analytics.
  # (see http://nginx.org/en/docs/http/ngx_http_userid_module.html)
  userid on;
  userid_name uid;
  userid_expires 365d;

  server_name yoursite.com;
  # any additional server-level configuration such as site root, etc...
 
  location / {
    access_log /var/log/nginx/yoursite.com/access.log json_combined;
  } 
  # You will probably want to add some gzip, cache, etc standard header rules for performance...
}

Then reload the Nginx configuration with:

sudo service nginx reload

We now have Nginx writing a dedicated access log with a JSON-formatted payload to /var/log/nginx/yoursite.com/access.log. We can configure Rsyslog to read this log file using the imfile module (in the past imfile defaulted to a polling mode, but it now defaults to filesystem events and is very performant and effectively real-time). With the imfile module reading the log file on Rsyslog's behalf, we can then forward the log data to Postgres using the Rsyslog ompgsql (PostgreSQL Database Output) module. The combined configuration is as follows:

# Load the imfile input module
module(load="imfile")

input(type="imfile"
      File="/var/log/nginx/yoursite.com/access.log"
      Tag="yoursite:")

# Load the ompgsql output module
module(load="ompgsql")

# Define a template for row insertion of your data.
# The template below assumes you have a table called
# "access_log" and are inserting columns named 
# "log_line" (with the log payload) and "created_at" (with the timestamp).
template(name="sql-syslog" type="list" option.sql="on") {
  constant(value="INSERT INTO access_log (log_line, created_at) values ('")
  property(name="msg")
  constant(value="','")
  property(name="timereported" dateformat="pgsql" date.inUTC="on")
  constant(value="')")
}      

# The output "action". This line instructs rsyslog
# to check if the log line is tagged "yoursite:" (a tag
# which we set with the imfile module configuration above)
# and if so to use the sql-syslog template we defined
# above to insert it into Postgres.
if( $syslogtag == 'yoursite:')  then {
  action(type="ompgsql" server="{{ postgres_host }}"
        user="{{ postgres_user }}" 
        pass="{{ postgres_password }}"
        db="{{ postgres_db_name }}"
        template="sql-syslog"
        queue.type="linkedList")
}

You will want to name this file something like /etc/rsyslog.d/51-yoursite.conf, since Rsyslog loads config files in alphabetical order and Ubuntu ships a default configuration file in the same directory called 50-default.conf. It probably goes without saying, but the ompgsql "action" line in the configuration above uses mock templatized credentials (I can recommend Ansible Vault for managing/templating credentials such as these in production). I should also note that, as Rsyslog is a very long-lived project, it supports several different configuration file formats. The example above uses the "advanced" (a.k.a. "RainerScript") format because I find it by far the most readable. Once you have saved the configuration file, you will need to restart the Rsyslog daemon for it to take effect:

sudo service rsyslog restart
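Note that the Rsyslog template above assumes the destination table already exists. A minimal sketch of compatible DDL (the column types are my assumption; Rsyslog delivers the payload as text, which you can cast to jsonb on the Postgres side when querying):

CREATE TABLE access_log (
  id         bigserial PRIMARY KEY,
  log_line   text NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Helpful for time-windowed analytics queries:
CREATE INDEX access_log_created_at_idx ON access_log (created_at);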

The above configuration should be quite performant, as the "linkedList" queue.type argument supplied to the ompgsql action instructs Rsyslog to buffer and batch its writes to Postgres. You can read about the performance tuning available for ompgsql in an excellent article, "Handling a massive syslog database insert rate with Rsyslog", written by Rainer Gerhards himself (the primary author of Rsyslog).
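If you need to push insert rates further, the same action accepts standard Rsyslog queue parameters. The values below are illustrative assumptions rather than recommendations from that article:

# Illustrative tuning: queue.size caps buffered messages,
# queue.dequeueBatchSize controls how many messages are written per batch,
# queue.workerThreads allows parallel writers, and
# action.resumeRetryCount="-1" retries indefinitely if Postgres is unreachable.
action(type="ompgsql" server="{{ postgres_host }}"
      user="{{ postgres_user }}"
      pass="{{ postgres_password }}"
      db="{{ postgres_db_name }}"
      template="sql-syslog"
      queue.type="linkedList"
      queue.size="10000"
      queue.dequeueBatchSize="128"
      queue.workerThreads="2"
      action.resumeRetryCount="-1")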

Dedicated vs. Cloud vs. VPS vs. PaaS – a value comparison
https://blog.supersetinc.com/2017/12/31/dedicated-vs-cloud-vs-vps-vs-paas-value-comparison/
Sun, 31 Dec 2017 03:59:03 +0000
I find myself increasingly evangelizing for awareness of infrastructure and ops practices because I believe that such awareness (or lack thereof) has cascading effects for application architecture and, ultimately, company success. Understanding the relative value of platforms can keep you on a path of rapid execution. Misunderstanding or neglecting it can get you into very dire situations.

I see many teams break out their applications into services prematurely, at immense cost in cognitive overhead and lost velocity. Typically the decision follows from a perception that they have hit a performance ceiling, when in fact they are still on some relatively weakly-provisioned PaaS.

I want to do a detailed value comparison to aid others in making informed infrastructure decisions.

For the purposes of this comparison, we will consider two scenarios: what I would call a mid-scale app, with 60GB of data in its persistence tier and a requirement to handle 200 requests/second, and what I will call a high-scale app, with 1TB of storage in use and a requirement to handle 3,000 requests/second.

We will also assume a relatively monolithic 12-factor application architecture, with an application tier, an application caching tier and a persistence tier.

There is necessarily a lot of variability in the performance profile and resource usage of applications. For the purposes of this comparison, we will assume a Ruby on Rails application running atop the Passenger web application server. We will assume each application process consumes 300MB, and that we use at most 88% of the available memory on each application server (which means we would fit 3 worker processes on an application server with 1024MB of RAM). The mid-scale application will therefore require 10 application processes (10 processes x 20 reqs/second each = 200 requests/second; the 20 reqs/second figure is derived below), and the high-scale application will require 150 application processes.

We will assume the application is tuned well enough to average 50ms server times, which means each active server process can accommodate roughly 20 reqs/second.

This makes the cost estimates in this comparison optimistic, as they account for neither sub-optimal request routing/load-balancing (particularly relevant to Heroku) nor the fact that very few applications have a performance profile that reliably holds a 50ms average server time.
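For concreteness, the sizing arithmetic above can be expressed as a short script (a sketch of this article's stated assumptions, not a general capacity model):

import math

# Assumptions from above: 300MB per worker process, at most 88% of a
# server's RAM devoted to workers, 50ms average server time per request.
def workers_per_server(ram_mb, process_mb=300, usable_fraction=0.88):
    return int(ram_mb * usable_fraction // process_mb)

def processes_needed(target_rps, avg_server_time_s=0.05):
    per_process_rps = 1 / avg_server_time_s  # 20 reqs/second at 50ms
    return math.ceil(target_rps / per_process_rps)

print(workers_per_server(1024))  # => 3 (the 1024MB server example)
print(processes_needed(200))     # => 10 processes (mid-scale)
print(processes_needed(3000))    # => 150 processes (high-scale)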

Category 1: PaaS

Note: based on our resource utilization profile outlined above, we assume that a Standard 2x Dyno can accommodate 3 worker processes, a Performance M Dyno can accommodate 7 processes (in 88% = 2.2GB of its total 2.5GB of memory) and a Performance L Dyno can accommodate 41 processes.

Mid-scale configuration
  Application Tier: $200-$1,250/mo
    4x Standard 2x Dynos ($200/mo)
    2x Performance M Dynos ($500/mo)
    1x Performance L Dyno ($500/mo)
  Persistence Tier: $50-$200/mo
    Heroku Postgres Standard 0 Plan ($50/mo)
    Heroku Postgres Standard 2 Plan ($200/mo)
    Heroku Postgres Premium 0 Plan ($200/mo)
  Caching Tier: $30-$60/mo
    Heroku Redis Premium 1 Plan ($30/mo)
    Heroku Redis Premium 2 Plan ($60/mo)
  Total: $280-$1,510/mo

High-scale configuration
  Application Tier: $6,500-$17,750/mo
    22x Performance M Dynos ($5,550/mo)
    4x Performance L Dynos ($2,000/mo)
  Persistence Tier: $2,000-$3,500/mo
    Heroku Postgres Standard 6 Plan ($2,000/mo)
    Heroku Postgres Standard 7 Plan ($3,500/mo)
    Heroku Postgres Premium 6 Plan ($3,500/mo)
  Caching Tier: $200-$750/mo
    Heroku Redis Premium 5 Plan ($200/mo)
    Heroku Redis Premium 7 Plan ($750/mo)
  Total: $8,700-$22,000/mo

Category 2: VPS

For our VPS survey we will look at two of the leading VPS providers: Linode and DigitalOcean. Linode's pricing is quite a lot better than DigitalOcean's, but Linode has a public history of DDoS episodes that have taken major sites offline, and an arguably less rich API and admin experience. I still consider Linode highly viable, and think it is worth taking a close look at both providers' offerings if you are considering moving your infrastructure to VPS.

Linode

Mid-scale configuration
  Application Tier: $20-$80/mo
    1x 4GB Linode ($20/mo)
    2x 4GB Linodes ($40/mo)
    1x 8GB Linode ($40/mo)
    2x 8GB Linodes ($80/mo)
  Persistence Tier: $40-$80/mo
    1x 8GB Linode ($40/mo)
    1x 12GB Linode ($80/mo)
  Caching Tier: $20/mo
    1x 4GB Linode ($20/mo)
  Total: $80-$180/mo

High-scale configuration
  Application Tier: $240/mo
    4x 16GB Linodes ($240/mo)
    2x 32GB Linodes ($240/mo)
  Persistence Tier: $160-$480/mo
    1x 24GB Linode ($160/mo)
    1x 64GB Linode ($480/mo)
  Caching Tier: $60-$120/mo
    1x 16GB High-memory Linode ($60/mo)
    2x 8GB Linodes ($80/mo)
    2x 16GB High-memory Linodes ($120/mo)
  Total: $460-$840/mo

DigitalOcean

Mid-scale configuration
  Application Tier: $20-$80/mo
    1x 4GB Droplet ($20/mo)
    2x 4GB Droplets ($40/mo)
    1x 8GB Droplet ($40/mo)
    2x 8GB Droplets ($80/mo)
  Persistence Tier: $40-$80/mo
    1x 8GB Droplet ($40/mo)
    1x 16GB Droplet ($80/mo)
  Caching Tier: $20/mo
    1x 4GB Droplet ($20/mo)
  Total: $80-$180/mo

High-scale configuration
  Application Tier: $320/mo
    4x 16GB Droplets ($320/mo)
    2x 32GB Droplets ($320/mo)
  Persistence Tier: $180-$260/mo
    1x 16GB Droplet + 1TB Block Storage ($180/mo)
    1x 32GB High-CPU Droplet + 1TB Block Storage ($260/mo)
  Caching Tier: $40-$80/mo
    1x 8GB Droplet ($40/mo)
    2x 8GB Droplets ($80/mo)
    1x 16GB Memory-optimized Droplet ($80/mo)
  Total: $540-$660/mo

Category 3: Cloud

For Cloud, we will consider configurations on Amazon Web Services, keeping things roughly aligned with our VPS specs. It is important to note that opting for Reserved pricing can realistically lower your AWS costs by as much as 40-45%, but you must have a rough idea of your resource utilization at least one year forward, so we will consider both Reserved and On-Demand pricing here. We have used 40% as a rough benchmark, since the precise savings depend on your specific service selection and whether you are prepaying.

Mid-scale configuration
  Application Tier: ~$88-~$156/mo
    1x t2.large (8GB) instance ($67.93/mo)
    2x t2.medium (4GB) instances ($67.94/mo)
    2x t2.large (8GB) instances ($135.86/mo)
    50GB General Purpose SSD EBS volume + snapshot storage (~$20/mo)
  Persistence Tier: ~$148/mo
    1x db.m4.large (8GB) RDS PostgreSQL instance with 50GB storage + 100GB backup storage ($148.48/mo)
  Caching Tier: ~$66-~$133/mo
    1x cache.m3.medium (2.78GB) ElastiCache (Redis) node ($65.88/mo)
    1x cache.m3.large (6.05GB) ElastiCache (Redis) node ($133.23/mo)
  Total: ~$322-~$375/mo (~$193-~$225/mo with reserved pricing)

High-scale configuration
  Application Tier: ~$563/mo
    4x t2.xlarge (16GB) instances ($543.44/mo)
    2x t2.2xlarge (32GB) instances ($543.44/mo)
    50GB General Purpose SSD EBS volume + snapshot storage (~$20/mo)
  Persistence Tier: ~$837/mo
    1x db.m4.2xlarge (32GB) RDS PostgreSQL instance with 1TB storage + 2TB backup storage ($837.06/mo)
  Caching Tier: ~$266.45-~$532.90/mo
    1x cache.m3.xlarge (13.3GB) ElastiCache (Redis) node ($266.45/mo)
    2x cache.m3.xlarge (13.3GB) ElastiCache (Redis) nodes ($532.90/mo)
  Total: ~$1,686-~$1,953/mo (~$1,012-~$1,172/mo with reserved pricing)

* Note: we have not been very specific in calculations concerning block storage (EBS) costs at the application tier. The requirements at this tier (if you are using block storage at all) are likely to be insubstantial (~50GB). Opting for provisioned IOPS may raise your costs into the low $100s; if you rely on General Purpose SSD plus instance-level ephemeral storage, you will likely pay on the order of $20/mo or less, as assumed here.

Category 4: Dedicated

It is not wholly straightforward to price a dedicated configuration, as dedicated servers can be so generously provisioned that at moderate scale (such as our "mid-scale" application) it often makes sense to run all services on a single host. Multi-host configurations are also possible, and appropriate at larger scales or in contexts that require high availability. We will assume a high-availability, multi-host configuration for our "high-scale" application and a single-host configuration for the "mid-scale" application. We will consider both the budget host OVH (good value, but a spotty reputation for support and even uptime) and the somewhat costlier Liquid Web (a US-based managed host).

OVH

OVH offers good-value dedicated hosting in several datacenters around the world; their BHS/Quebec location is the most relevant for American companies. There are some big caveats to OVH's dedicated offering, and it may not suit all applications. They have a history of leaving customers relatively on their own to sort out everything short of hardware failure. I also once experienced firsthand an episode of hours-long downtime due to a road accident that severed the datacenter's fiber line (it is worrying that there was neither better fortification nor sufficient redundancy to keep hosts online). That one episode is the only major one I experienced, however, and it is well known that AWS services also go down, sometimes for similar periods of time, and sometimes even across all availability zones (as memorably occurred with S3 in 2017).

Mid-scale configuration
  Application Tier: $89.99/mo
    1x HOST-32L, Xeon D-1520 (4 core/8 thread), 32GB RAM, 2x 480GB SSD ($89.99/mo)
  Persistence Tier: $0/mo (utilize same host)
  Caching Tier: $0/mo (utilize same host)
  Total: $89.99/mo

High-scale configuration
  Application Tier: $179.98-$383.98/mo
    2x HOST-32L, Xeon D-1520 (4 core/8 thread), 32GB RAM, 2x 480GB SSD ($179.98/mo)
    2x EG-64, Xeon D-1520 (4 core/8 thread), 64GB ECC RAM, 2x 480GB NVMe SSD ($383.98/mo)
  Persistence Tier: $89.99-$191.99/mo
    1x HOST-32L ($89.99/mo)
    1x EG-64 ($191.99/mo)
  Caching Tier: $89.99-$191.99/mo
    1x HOST-32L ($89.99/mo)
    1x EG-64 ($191.99/mo)
  Total: $359.96-$767.96/mo

Liquid Web

Liquid Web offers managed hosting of dedicated servers. I include them in this comparison as a contrast to OVH, since OVH is hands-off to a degree that may not align with the requirements of some businesses or teams. You pay a premium for a hands-on managed hosting solution, but the value is still good despite this.

Mid-scale configuration
  Application Tier: $399/mo
    1x Xeon E5-2620 v4 server, 32GB RAM, 2x 480GB SSD ($399/mo)
  Persistence Tier: $0/mo (utilize same host)
  Caching Tier: $0/mo (utilize same host)
  Total: $399/mo

High-scale configuration
  Application Tier: $798/mo
    2x Xeon E5-2620 v4 servers, 32GB RAM, 2x 480GB SSD ($798/mo)
  Persistence Tier: $399/mo
    1x Xeon E5-2620 v4 server ($399/mo)
  Caching Tier: $399/mo
    1x Xeon E5-2620 v4 server ($399/mo)
  Total: $1,596/mo

The Wildcard: Colocation

At a certain scale you may find your best fit to be none of the discussed solutions but rather colocation. With colocation you rent space in a data center (typically by the quarter-, half- or full cabinet) and pay for power and bandwidth, supplying your own hardware. This means paying hardware costs upfront and then paying comparatively far less in ongoing hosting costs. There are also potential accounting advantages to colocation (as you own the hardware, you can write down its depreciation). You will need team members experienced in managing physical servers, and colocation is likely a good fit only for quite high-scale projects. If you want to read a good overview of a colocation deployment in practice, I recommend Nick Craver's article about Stack Overflow's infrastructure.

Conclusion

I hope this article has provided a rough overview of the comparative costs of the various paths you might follow with your infrastructure. Be aware that it is neither exhaustive nor conclusive: there is a lot of variability in architectures and needs, and the numbers used should be considered only a rough approximation of value. It should also be understood that in order to utilize non-PaaS infrastructure solutions well, ergonomic tooling and processes (for things such as bootstrapping new environments, database backup and restore, deployment, etc.) will be required. We have posted elsewhere on this subject (see: "How to mess up DevOps: working at the wrong level of abstraction"), so please refer to that article or elsewhere on the web for more on infrastructure management and DevOps tooling.

We provide a matrix below to summarize the data from this article:

Provider       Mid-Scale Application              High-Scale Application
Heroku         $280 - $1,510                      $8,700 - $22,000
AWS            ~$322 - ~$375 (~$193 - ~$225 *)    ~$1,686 - ~$1,953 (~$1,012 - ~$1,172 *)
Liquid Web     $399                               $1,596
DigitalOcean   $80 - $180                         $540 - $660
Linode         $80 - $180                         $460 - $840
OVH            $89.99                             $359.96 - $767.96

* indicates reserved pricing

How to mess up DevOps: working at the wrong level of abstraction
https://blog.supersetinc.com/2017/12/23/mess-devops-wrong-level-abstraction/
Sat, 23 Dec 2017 10:16:55 +0000
There is a story I’ve seen unfold enough times to find disappointing:

A tech company gets its product off the ground with a small handful of developers and a user-friendly fully hosted Platform-as-a-Service (PaaS) solution like Heroku.

The company’s product is a success. A huge one! The company raises money, they scale the team, they iterate. One thing that doesn’t change is the PaaS. It’s working for them. Maybe not as well as they’d like but well enough to keep up with the roadmap.

At some point, costs get way out of hand. The once $1k/month bill has exploded to $40k/month. On top of this, developers are sick of hacking around arbitrary constraints of the PaaS. They learn of the dramatically better performance they can achieve at lower cost if they take greater ownership of their infrastructure.

They engage consultants. Often dubious ones.

The consultants come in with fantastic promises and build with buzzword-y tools. Projects drag on for months. Maybe years. If anything does see the light of day from the consultants’ efforts, it is so bespoke and complicated that simple tasks require several arcane commands that developers can never keep in their heads.

In most cases, I believe that this scenario is a consequence of fundamentally misunderstanding the function of DevOps, and of working at the wrong level of abstraction.

DevOps is about building user interfaces for developers.

If consultants are spending 90% of their time working out lower-level nuances of infrastructure orchestration then they have failed.

Just as the Ruby on Rails framework freed developers from having to make decisions about mundane implementation details that are universal to all web apps, various DevOps/enterprise PaaS solutions free you from having to architect from scratch basic and universal functionality like access management, log shipping, and exposing environment variables to applications.

[Screenshot: the Rancher admin interface]

These solutions include, but are not limited to, Rancher, OpenShift, and Cloud66.

Somewhere in the middle are also hosted platforms like Google App Engine, which give you a PaaS-like experience ("app" abstractions, CLI, etc.) but at better value and with fewer resource constraints than offerings like Heroku.

[Screenshot: the Cloud66 admin UI]

Working with any such solution is a vastly different experience than working with low-level infrastructure automation tools like Chef, Ansible, or the increasingly popular Terraform, or with bespoke CLIs built around tools of this nature.

I find that an instructive litmus test of whether DevOps is delivering for a team is the ease with which a developer (not an infrastructure specialist) can “fork” (copy) an existing app and deploy to it. With high-level tools like OpenShift or Cloud66 this should be a relatively intuitive and quick task. With more bespoke tools it may not be something the developer is ever able to achieve by themselves.

Fundamentally, if your developers cannot work comfortably with your DevOps solution, the solution is a liability. I hope that this article has shed light on the critical distinction between infrastructure automation and higher-level PaaS, and that it saves organizations from going too far down the road of bespoke DevOps solutions where they may not be truly necessary or well-suited.
