Viddler Architecture - 7 Million Embeds a Day and 1500 Req/Sec Peak

Viddler is in the high-quality Video-as-a-Service business, for customers who want to pay a fixed cost, be done with it, and just have it work. They are similar to Blip and Ooyala, and more focused on business than YouTube. They serve thousands of business customers, including high-traffic websites like FailBlog, Engadget, and Gawker.

Viddler is a good case to learn from because they are a small company trying to provide a challenging service in a crowded field. We are catching them just as they are transitioning from a startup that began in one direction, as a YouTube competitor, into a slightly larger company focused on paying business customers.

Transition is the key word for Viddler: transitioning from a free YouTube clone to a high-quality paid service. Transitioning from a few colo sites that didn't work well to a new, higher-quality datacenter. Transitioning from an architecture that was typical of a startup to one that features redundancy, high availability, and automation. Transitioning from a lot of experiments to figuring out how they want to do things and making that happen. Transitioning from an architecture where features were spread out amongst geographically distributed teams using different technology stacks to one with clearly defined roles.

In other words, Viddler is like most every other maturing startup out there and that's fun to watch. Todd Troxell, Systems Architect at Viddler, was kind enough to give us an interview and share the details on Viddler's architecture. It's an interesting mix of different technologies, groups, and processes, but it somehow seems to all work. It works because behind all the moving parts is the single idea: making the customer happy and giving them what they want, no matter what. That's not always pretty, but it does get results.

Site: Viddler.com

The Stats

  1. About 7 million embed views per day.
  2. About 3000 videos uploaded per day.
  3. 1500 req/sec at peak.
  4. ~130 people pressing the play button at peak.
  5. 1 PB of video served in February.
  6. 80 TB of storage.
  7. 45,160 hours of CPU time spent on video encoding in the last 30 days.
  8. Usage is relatively flat throughout the day, with only a ~33% difference between valley and peak, which suggests they get a lot of usage globally.
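
As a quick back-of-the-envelope check on these numbers (the arithmetic below is ours, not from the interview), the average embed rate and the steady-state encoding load work out roughly as follows:

    # Rough arithmetic on the stats above; the inputs come from the list, the
    # calculations are our own estimates.
    embeds_per_day = 7000000.0
    print(embeds_per_day / 86400)        # ~81 embed views/sec on average, versus
                                         # 1500 req/sec at peak (each embed view
                                         # triggers several requests)

    encode_cpu_hours = 45160.0           # CPU time spent encoding over 30 days
    print(encode_cpu_hours / (30 * 24))  # ~63 CPU cores kept busy around the clock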

The Platform

Software

  1. Debian Linux
  2. Rails 2.x - dashboard, root page, contest manager, various subfeatures
  3. PHP - various legacy subsites that use their internal APIs
  4. Python/ffmpeg/mencoder/x264lib - video transcoding
  5. Java 1.6 / Spring / Hibernate / ehcache - API and core backend
  6. MySQL 5.0 - main database
  7. Munin, Nagios, Catchpoint - monitoring
  8. Python, Perl, and Ruby - many, *many* monitors and scripts
  9. Erlang - ftp.viddler.com
  10. Apache 2/mod_jk - core headend for backend Java application
  11. Apache/mod_dav - video storage
  12. Amazon S3 - video storage
  13. Amazon EC2 - upload and encoding
  14. KVM - virtualization for staging environment
  15. Hadoop HDFS - distributed video source storage
  16. Nginx/Keepalived - Load balancers for all web traffic
  17. Wowza - RTMP video recording server
  18. Mediainfo - reporting video metadata
  19. Yamdi - metadata injector for flash videos
  20. Puppet - configuration management
  21. Git/GitHub - https://github.com/viddler/
  22. Logcheck - log scanning
  23. Duplicity - backup
  24. Trac - bug tracking
  25. Monit - process state monitoring / restarting
  26. iptables - for firewalling - no need for hardware firewall - also handles NAT for internal network.
  27. maatkit, mtop, xtrabackup - db management and backup
  28. Preboot eXecution Environment (PXE) - network boot of computers.
  29. Considering OpenStack's Swift as an alternative file store.
  30. FFmpeg - a complete, cross-platform solution to record, convert and stream audio and video.
  31. Apache Tomcat
  32. MEncoder - a free command line video decoding, encoding and filtering tool.
  33. EdgeCast - CDN

Hardware

  1. 20+ nodes in total in their colo:
    1. 2 Nginx load balancers.
    2. 2 Apache servers.
    3. 8 Java servers run the API.
    4. 2 PHP/Rails servers run the front-end.
    5. 3 HDFS servers.
    6. 1 Monitoring server.
    7. 1 Local encoding server.
    8. 2 storage servers.
    9. 2 MySQL database servers run in master-slave configuration, plan on moving to 6 servers.
    10. 1 staging server.
  2. Amazon servers are used for video encoding.

The Product

Sign up for a Viddler account and they'll translate your video to whatever format is needed so it will display on any device. They provide an embed code, dashboard, and analytics. Their goal is to wrap up the problem of video behind a simple interface so people can just buy the service and forget about it; it just works and does everything you need it to do. As a content provider you can sell views and add ads to content. They bring all the ad networks into Viddler as a kind of meta-interface across the different ad platforms.

The Architecture

  1. The Current Era
    1. Their current system runs on bare metal in a colo somewhere in the Western US. They are in the process of moving to Internap in New York.
    2. A 1 Gbps link connects them to the Internet. A CDN serves the video and uploads go directly into Amazon for processing, so they don't need more network than this for their main site.
    3. The system is fronted by 2 Nginx load balancers, one active and one passive, using Keepalived for failover. They chose Nginx because it is Linux-based, open source, and it worked.
    4. EdgeCast is used as the CDN to distribute content. Customers upload video directly to Amazon; the video is processed and then uploaded to the CDN.
    5. Nginx can fail over to the two Apache servers. The Apache servers can fail over to one of the backend servers running Tomcat.
    6. Part of their architecture runs in Amazon: storage and their upload and encoding services.
    7. Experimented with Cassandra for storing dashboard positions. Very reliable, but will probably move to MySQL in the future.
    8. Two image scaling nodes at Linode for creating arbitrary thumbnails for videos. That will move to New York in the future.
  2. The Very Soon Era at Internap
    1. The original idea for the site was a free video site, YouTube but better. Then they pivoted to be more of a high-quality paid service, which dictates the need for a better, more reliable infrastructure.
    2. They are in the process of moving to Internap, so not everything has been worked out yet. Some issues in their previous datacenter motivated the move:
      1. Network issues: some BGP (Border Gateway Protocol) peers would stop working and wouldn't be removed automatically, so they would end up with a dead site for an hour and had to really push to get their datacenter to manually remove the peer.
      2. They were subleased to a provider who kicked them out, which meant they had to move two racks of servers with little lead time.
      3. Internap is well known for their good network, it's a better facility, and is more reliable.
    3. A major goal is to have complete redundancy in their architecture: doubling the number of RTMP servers, adding a dedicated error recording system, doubling the monitoring servers, splitting out the PHP and Rails servers, adding dedicated image scaling servers, and doubling the number of encoding servers.
    4. Another major goal is complete automation. They will PXE boot computers over the network, get an OS on them, and configure packages from CVS. Currently their system is unreproducible and they would like to change that.
    5. Experimenting with HDFS as a file store for videos. They store 10% of their videos, about 20TB, on 3 HDFS nodes, and it has never been down.
    6. The current goal is to get everything moved over, have the entire system autobuilt and in version control, make sure ops people are hired, and have a schedule.
    7. One observation is that they are in a similar business to Amazon: in the video world it's a lot cheaper to do everything yourself, but then you have to do everything yourself.
    8. There are no plans to use a service-oriented architecture. They have an internal API and an external API. Both are used to create the site. There are higher-reward features to implement than changing over to a service approach.
  3. Automation will transform everything.
    1. Portable VMs will allow them to reproduce build environments and live environments. Currently these are not reproducible. Everyone develops on their own OS using different versions of packages.
    2. It will allow them to iterate on architecture: try new storage, etc., by just downloading a new VM to run against.
    3. It will make any transition to OpenStack less painful. They are considering VMware as well.
  4. When you upload to Viddler, the endpoint is a node on Amazon EC2 running Tomcat.
    1. The file is buffered locally and sent to S3.
    2. The file is then pulled down from S3 for encoding.
    3. The encoding process has its own queue in a module called Viddler Core.
    4. They segregate code that runs in their colo site from code that runs in Amazon. The code that runs in Amazon doesn't maintain state. A spawned node can die because all the state is kept in S3 or Viddler Core.
    5. A Python encoding daemon pulls work off the queue (a rough sketch of this loop appears after the architecture list):
      1. Runs FFmpeg, MEncoder, and other encoders.
      2. There's lots of funky stuff about checking if iPhone video needs rotating before encoding and other tests.
      3. Each encoding node runs on an Amazon 8 core instance. Four encodings run at a time. They haven't tried to optimize this yet.
      4. Jobs are run in priority order. A live upload that someone wants to see right away will be handled before a batch job of say adding iPhone support to their encodes.
      5. The Python daemons are long-running and they haven't seen any problems with memory fragmentation or other issues.
  5. Exploring real-time transcoding.
    1. In real-time encoding an instance is fed something like a multi-part form upload, streams it through FFmpeg, and then streams the result back out again (see the streaming sketch after the architecture list). This could become part of their CDN.
    2. The biggest advantage of this architecture is there is no wait. Once a customer has uploaded a video it's live.
    3. The implication is that only one format of a video would need to be stored. It could be transcoded on demand for the CDN. This could save the customer a lot of money in storage costs.
  6. Storage costs:
    1. CDN and storage are their biggest costs. Storage is about 30% of their costs.
    2. The average case for people who want their video to play on everything is four formats. HTTP streaming will be another format. So storage cost is a major expense for customers.
  7. Team setup:
    1. Local programmers do front-end in PHP/Rails. They are trying to move over all the front-end to this stack, though some of it is in Java currently.
    2. The core Tomcat/Java/Spring/Hibernate backend is coded by a team in Poland. The goal is for the Java team to implement the API and backend logic.
    3. They plan on having a separate database for the PHP/Rails team because they move at a much quicker pace than the Java team, and they want to decouple the teams as much as possible so they are not dependent on each other.
  8. Ran a reliability survey and found most of their outages were due to:
    1. Network connection problems. They are moving to new datacenter to fix this issue.
    2. Database overload. To fix this:
      1. The database contains about 100 tables. The User table has about 100 parameters, which includes information like encoding preferences. The schema still has legacy structure from when the site ran on WordPress.
      2. Triple database capacity.
      3. Use servers that are much faster and have more memory.
      4. Using a dual-master setup and 4 read slaves (a read-routing sketch appears after the architecture list).
      5. Two read slaves will have a low query timeout for interactive traffic.
      6. Two slaves will be dedicated to reporting and will have long query timeouts. With this approach a slow report query will not take the site down. Their top queries work on tables that have 10 million rows, so calculating top videos takes much longer than it used to because the queries started creating temp tables, which can cause the system to go down for 5 seconds at a time.
  9. They are investigating running their own CDN using Squid inside their own colos.
    1. Maybe using west coast and east coast colos to have geographically distributed peers.
    2. For their customers they project they would need 4 sites in the US and one in Europe.
    3. EdgeCast gives them a good deal and provides them useful features like stats per CNAME, but on a profit basis building their own CDN would be worth the development effort. CDN costs are a substantial part of their cost structure and it would be worth squeezing that out over time.
  10. The future: the long-term goal is to see how much money can be saved by getting out of Amazon, running storage locally, running OpenStack, and running their own CDN. That would save 80% of their non-people-related operating expenses. From their own calculations they can do it way cheaper than Amazon.
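
Below is a rough Python sketch of the encoding loop described in point 4 above. The queue implementation, bucket names, helper names, and FFmpeg flags are our own assumptions for illustration; only the overall flow - pull the highest-priority job, fetch the source from S3, run the encoder, push the result back - comes from the interview.

    # Hypothetical sketch of the Python encoding daemon; names and flags are
    # invented, the flow follows the description above.
    import heapq
    import itertools
    import subprocess

    import boto3  # assumes the AWS SDK for Python; the real daemon may use another client

    s3 = boto3.client("s3")
    SOURCE_BUCKET = "viddler-source-example"   # hypothetical bucket names
    OUTPUT_BUCKET = "viddler-encoded-example"

    _queue, _tie = [], itertools.count()

    def enqueue(job, priority):
        # Lower number = more urgent. A live upload someone is waiting on might
        # use 0; a batch job like adding iPhone formats to old videos might use 10.
        heapq.heappush(_queue, (priority, next(_tie), job))

    def encode(job):
        src = "/tmp/%s.src" % job["video_id"]
        out = "/tmp/%s.mp4" % job["video_id"]
        s3.download_file(SOURCE_BUCKET, job["source_key"], src)   # pull the source down
        subprocess.check_call(                                    # one pass per output format
            ["ffmpeg", "-y", "-i", src,
             "-c:v", "libx264", "-b:v", job["bitrate"], out])
        s3.upload_file(out, OUTPUT_BUCKET, "%s.mp4" % job["video_id"])  # push the result back

    def run_forever():
        # Long-running daemon. The node keeps no state of its own, so it can die
        # at any time; everything lives in S3 or Viddler Core. The real nodes run
        # four encodes at a time on an 8-core instance.
        while True:
            if _queue:
                _priority, _count, job = heapq.heappop(_queue)
                encode(job)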
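
The real-time transcoding idea from point 5 amounts to piping bytes straight through FFmpeg: the upload comes in, is transcoded on the fly, and streams back out without a second stored copy. The container, codec flags, and pumping scheme below are assumptions; the interview only describes the stream-in, stream-out concept.

    # Hypothetical sketch of on-demand streaming transcode: read the upload from
    # one file-like object, pipe it through FFmpeg, and write the transcoded
    # stream to another. Flags and chunk size are invented for illustration.
    import subprocess
    import threading

    def stream_transcode(source, sink, chunk_size=64 * 1024):
        proc = subprocess.Popen(
            ["ffmpeg", "-i", "pipe:0",               # read the incoming upload on stdin
             "-c:v", "libx264", "-b:v", "1500k",
             "-f", "mpegts", "pipe:1"],              # emit a streamable container on stdout
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)

        def feed():
            # Feed the upload in a separate thread so that reading FFmpeg's output
            # below cannot deadlock against a full stdin pipe.
            while True:
                data = source.read(chunk_size)
                if not data:
                    break
                proc.stdin.write(data)
            proc.stdin.close()

        threading.Thread(target=feed).start()
        while True:
            data = proc.stdout.read(chunk_size)
            if not data:
                break
            sink.write(data)                         # e.g. straight out toward the CDN
        proc.wait()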
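
Finally, a sketch of the read-splitting described in point 8: interactive page queries go to slaves with a short time budget, while reporting queries go to slaves that are allowed to run long. The host names, driver, and pooling scheme are our assumptions; the interview only states the dual-master plus four-slave topology, and does not say how the query timeouts are enforced (MySQL 5.0 has no built-in per-query timeout), so that part is left as a comment.

    # Hypothetical sketch of routing reads by workload so a slow report can never
    # stall page loads. Hosts, credentials, and driver are invented for illustration.
    import random
    import MySQLdb  # assumes the MySQLdb driver

    MASTERS            = ["db-master-1", "db-master-2"]    # dual-master pair
    INTERACTIVE_SLAVES = ["db-read-1", "db-read-2"]         # short query budget
    REPORTING_SLAVES   = ["db-report-1", "db-report-2"]     # long reports allowed

    def connection(workload="interactive"):
        if workload == "write":
            host = MASTERS[0]                     # write to one active master at a time
        elif workload == "reporting":
            host = random.choice(REPORTING_SLAVES)
        else:
            host = random.choice(INTERACTIVE_SLAVES)
        # Query timeouts would be enforced outside the connection, e.g. by a
        # watcher script that kills long-running queries on the interactive slaves.
        return MySQLdb.connect(host=host, db="viddler", user="app", passwd="secret")

    # Example: the temp-table-heavy "top videos" report runs against the reporting
    # pool, so it can take minutes without touching interactive traffic.
    # cur = connection("reporting").cursor()
    # cur.execute("SELECT video_id, COUNT(*) AS views FROM video_views "
    #             "GROUP BY video_id ORDER BY views DESC LIMIT 10")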

Lessons Learned

  1. Mix and Match. They are using a combination of nodes from different providers. The CDN handles the content. Amazon is used for stateless operations like encoding and storage. Nodes in the colo are used for everything else. It may be a bit confusing having functionality in several different locations, but they are staying with what works unless there's a compelling business reason or ease of use reason to change. Moving everything to Amazon might make sense, but it also would take them away from their priorities, it would be risky, and it would cost more.
  2. Watch out for table growth. Queries that used to take a reasonable amount of time can suddenly crush a site once it grows larger. Use reporting instances to offload reporting traffic from interactive traffic. And don't write queries that suck.
  3. Look at costs. Balancing costs is a big part of their decision making process. They prefer growth via new features over consolidation of existing features. It's a tough balancing act, but consciously making this a strategic imperative helps everyone know where you are going. In the longer term they are thinking about how they can get the benefits of the cloud operations model while taking advantage of the lower cost structure of their own colo.
  4. Experiment. Viddler loves to experiment. They'll try different technologies to see what works and then actually make use of them in production. This gives them an opportunity to see if new technologies can help them bring down their costs and provide new customer features.
  5. Segment teams by technology stack and release flexibility. Having distributed teams can be a problem. Having distributed teams on different technology stacks and radically different release cycles is a big problem. Having distributed teams with strong dependencies and cross functional responsibilities is a huge problem. If you have to be in this situation then moving to a model with as few dependencies between the groups is a good compromise.
  6. Learn from outages. Do a survey of why your site went down and see what you can do to fix the top problems. Seems obvious, but it isn't done enough.
  7. Use free users as guinea pigs. Free users have a lower SLA expectation so they are candidates for new infrastructure experiments. Having a free tier is useful for just this purpose, to try out new features without doing great harm.
  8. Pay more for top tier hosting. The biggest problem they've had is picking good datacenters. Their first and second datacenters had problems. Being a scrappy startup they looked for the cheapest yet highest quality datacenter they could find. It turns out datacenter quality is hard to judge. They went with a top name facility and got a great price. This worked fine for months and then problems started happening. Power outages, network outages, and they eventually were forced to move to another provider because the one they were with was pulling out of the facility. Being down for any length of time is not acceptable today and a redundant site would have been a lot of effort for such a small group. Paying more for a higher quality datacenter would have cost less in the long run.
  9. What matters in the end is what the user sees, not the architecture. Iterate and focus on customer experience above all else. Customer service is even valued above a sane or maintainable architecture. Build only what is needed. They could not have kick-started this company while maintaining 100% employee ownership without running ultra scrappy. They are now taking what was learned in the scrappy stage and building a very resilient multi-site architecture in a top-tier facility. Their system is not the most efficient or the prettiest; the path they took is that the customer needs something, so they built it. They go after what the customer needs. The way they went about selecting hardware and building software, with an emphasis on getting the job done, is what built the company.
  10. Automate. While all the experimentation and solve-the-immediate-problem-for-the-customer stuff is nice, you still need an automated environment so you can reproduce builds, test software, and provide a consistent and stable development environment. Automate from the start.

I'd really like to thank Todd Troxell for taking the time for this interview.

And remember kids, if you are looking for work, Viddler is looking for you too.

Related Articles

  1. We CAN handle your traffic! by Colin Devroe