Here at Panda, we are constantly impressed by the requests our customers bring us, and by how they push our technology into new areas. We’ve been experimenting with more techniques over the past year, and we’ve officially pushed one of our most exciting ones to production.
Introducing frame rate conversion by motion compensation. This has been live in production for some time now and is being used by select customers. We wanted to hold off until we saw consistent success before officially announcing it :) We’ll try to explain the very basics to give you an intuition of how it works – however, if you have any questions about it, and how to leverage it for your business needs, give us a shout at email@example.com.
Motion compensation is a technique that was originally used for video compression, and now it’s used in virtually every video codec. Its inventors noticed that adjacent frames usually don’t differ too much (except for scene changes), and then used that fact to develop a better encoding scheme than compressing each frame separately. In short, motion-compensation-powered compression tries to detect movement that happens between frames and then use that information for more efficient encoding. Imagine two frames:
Now, a motion compensating algorithm would detect the fact that it’s the same panda in both frames, just in different locations:
We’re still thinking about compression, so why would we want to store the same panda twice? Yep, that’s what motion-compensation-powered compression does – it stores the moving panda just once (usually, it would store the whole frame #1), plus information about the movement. The decompressor then uses this information to reconstruct the rest (frame #2 based on frame #1).
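To build intuition, here’s a minimal sketch of the block-matching idea – not Panda’s production code, which is heavily optimized and partly hand-written assembly. Frames are treated as grids of pixel values, and we exhaustively search a small window for the displacement with the lowest sum of absolute differences (SAD):

```go
package main

import "fmt"

// newFrame returns a w x h frame of zero-valued (black) pixels.
func newFrame(w, h int) [][]int {
	f := make([][]int, h)
	for y := range f {
		f[y] = make([]int, w)
	}
	return f
}

// fillBlock paints a size x size block of value v at (x, y).
func fillBlock(f [][]int, x, y, size, v int) {
	for dy := 0; dy < size; dy++ {
		for dx := 0; dx < size; dx++ {
			f[y+dy][x+dx] = v
		}
	}
}

// sad is the sum of absolute differences between the size x size block at
// (bx, by) in ref and the block displaced by (dx, dy) in cur.
func sad(ref, cur [][]int, bx, by, dx, dy, size int) int {
	total := 0
	for y := 0; y < size; y++ {
		for x := 0; x < size; x++ {
			d := ref[by+y][bx+x] - cur[by+dy+y][bx+dx+x]
			if d < 0 {
				d = -d
			}
			total += d
		}
	}
	return total
}

// motionVector exhaustively searches displacements within +/-search pixels
// for the one that best matches ref's block inside cur (minimum SAD).
func motionVector(ref, cur [][]int, bx, by, size, search int) (int, int) {
	bestDx, bestDy := 0, 0
	bestCost := int(^uint(0) >> 1) // max int
	for dy := -search; dy <= search; dy++ {
		for dx := -search; dx <= search; dx++ {
			// Skip candidates that would read outside the frame.
			if by+dy < 0 || bx+dx < 0 || by+dy+size > len(cur) || bx+dx+size > len(cur[0]) {
				continue
			}
			if cost := sad(ref, cur, bx, by, dx, dy, size); cost < bestCost {
				bestCost, bestDx, bestDy = cost, dx, dy
			}
		}
	}
	return bestDx, bestDy
}

func main() {
	// The "panda" is a bright 4x4 block that moves 3 pixels right, 1 down.
	f1 := newFrame(16, 16)
	f2 := newFrame(16, 16)
	fillBlock(f1, 4, 4, 4, 255)
	fillBlock(f2, 7, 5, 4, 255)
	dx, dy := motionVector(f1, f2, 4, 4, 4, 4)
	fmt.Println(dx, dy) // prints "3 1" – the encoder only stores this vector
}
```

Instead of storing the panda’s pixels twice, the encoder can store them once plus the tiny (dx, dy) pair.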
That’s the general idea, but in practice it’s not as smooth and easy as in the example. The objects are rarely the same, and usually some distortions and non-linear transformations creep in. Scanning for movements is very expensive computationally, so we have to limit the search space (and optimize the hell out of the code, even resorting to hand-written assembly).
Okay, but compression is not the topic of this post. Frame rate conversion is, and motion compensation can be used for this task too, often with really impressive results.
For illustration, let’s go back to the moving panda example. Let’s assume we display 2 frames per second (not impressive), but we would like to display 3 frames per second (so impressive!), and the video shouldn’t play any faster when we’re done converting.
One option is to cheat a little bit and just duplicate a frame here and there, getting 3 FPS as a result. In theory we could accomplish our goal that way, but the quality would suck. Here’s how it would work:
Yes, the output has 3 frames where the input had 2, but the effect isn’t visually appealing. We need a bit of magic to create a frame that humans would see as naturally fitting between the two initial frames – the panda has to be in the middle. That is a task motion compensation can handle – detect the motion, but instead of using it for compression, create a new frame based on the gathered information. Here’s how it should work:
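A toy sketch of the idea, under strong simplifying assumptions (a single rigid block on a black background, a motion vector we already detected, and the half-shifted block staying inside the frame): shift the moving block half-way along its vector to synthesize the in-between frame. Real interpolators work per-block and have to handle occlusions, but the principle is the same:

```go
package main

import "fmt"

// newFrame returns a w x h frame of zero-valued (black) pixels.
func newFrame(w, h int) [][]int {
	f := make([][]int, h)
	for y := range f {
		f[y] = make([]int, w)
	}
	return f
}

// interpolate builds the in-between frame: it copies prev, erases the
// moving size x size block at (bx, by), and redraws it half-way along the
// detected motion vector (dx, dy). Toy assumption: black background.
func interpolate(prev [][]int, bx, by, size, dx, dy int) [][]int {
	mid := make([][]int, len(prev))
	for y := range prev {
		mid[y] = append([]int(nil), prev[y]...)
	}
	// Save the block's pixels first: old and new positions may overlap.
	block := make([][]int, size)
	for y := 0; y < size; y++ {
		block[y] = append([]int(nil), prev[by+y][bx:bx+size]...)
	}
	// Erase the old position...
	for y := 0; y < size; y++ {
		for x := 0; x < size; x++ {
			mid[by+y][bx+x] = 0
		}
	}
	// ...then paint the block at half the displacement.
	nx, ny := bx+dx/2, by+dy/2
	for y := 0; y < size; y++ {
		for x := 0; x < size; x++ {
			mid[ny+y][nx+x] = block[y][x]
		}
	}
	return mid
}

func main() {
	prev := newFrame(16, 16)
	// A 4x4 "panda" at (4, 4), known to move by (4, 2) to the next frame.
	for y := 4; y < 8; y++ {
		for x := 4; x < 8; x++ {
			prev[y][x] = 255
		}
	}
	mid := interpolate(prev, 4, 4, 4, 4, 2)
	fmt.Println(mid[5][6], mid[4][4]) // prints "255 0": the block now starts at (6, 5)
}
```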
These are the basics of the basics of the theory. Now for an example, taken straight from a Panda encoder. Let’s begin with how frame duplication (the bad guy) looks (for better illustration, after converting the FPS we slowed down the video, getting slow motion as a result):
See that jitter on the right? Yuck. Now, what happens if we use motion compensation (the good guy) instead:
It looks a lot better to me, the movement is smooth and there are almost no video artifacts visible (maybe just a slight noise). But, of course, other types of footage are able to fool the algorithm more easily. Motion compensation assumes simple, linear movement, so other kinds of image transformations often produce heavier artifacts (they might be acceptable, though – it all depends on the use case). Occlusions, refractions (water bubbles!) and very quick movement (which means that too much happens between frames) are the most common examples. Anyway, it’s not as terrible as it sounds, and still better than frame duplication. For illustration, let’s use a video full of occlusions and water:
Okay, now, let’s slow it down four times with both frame duplication and motion compensation, displayed side-by-side. Motion compensation now produces clear artifacts (see those fake electric discharges?), but still looks better than frame duplication:
And that’s it. The artifacts are visible, but the unanimous verdict of a short survey in our office is: the motion-compensated effect is a lot more pleasant than frame duplication. The feature is not publicly available yet, but we’re enabling it for our customers on demand. Please remember that it’s hard to guess how your videos will look when treated with our FPS converter, but if you’d like to give it a chance and experiment a bit, just drop us an email at firstname.lastname@example.org
The edge of video transmission moves quickly: HD television has been mainstream for some time and 4K is gaining traction; H.264 is ubiquitous, and HEVC is entering the stage. Yet most people still remember VHS. It’s good to keep up with the latest tech, but unfortunately the world lags behind most of the time.
Television is a different universe than Internet transmission. The rules are made by big (usually government) bodies and rarely change. Although most countries have switched to digital transmission, standard definition isn’t gone yet – SD channels are still very popular, which forces content providers to support SD formats too.
Recently, we’ve helped a few clients to craft transcoding pipelines that support all these retiring-yet-still-popular formats. We’ve noticed that it’s a huge nuisance for content makers to invest in learning old technology and that they would love to shed the duty on someone else; so we made sure that Panda (both the platform and the team) can deal with these flawlessly.
There’s huge variability among requirements pertaining to SD: for example, you have to decide how the image should be fitted to the screen. High-quality downsampling is always used, but you have to decide what to do when the dimensions are off: should you use letterboxing, or maybe stretch the image?
Another decision (which usually is not up to you) is what exact format should be used. This almost always depends on the country the video is for. Although the terms NTSC, PAL and SECAM come from the analog era (digital TV uses standards like ATSC and DVB-T), they are still used to describe encoding parameters in digital transmission (e.g. image dimensions, display aspect ratio and pixel aspect ratio). Another thing the country affects is the compression format; the most popular are MPEG-2 and H.264, though they are not the only ones.
Standard television formats also have specific requirements on frame rate. It’s a bit different than with Internet transmission, where the video is effectively a stream of images. In SD TV, transmission is interlaced, and instead of frames it uses fields (which contain only half the information that frames do, but save bandwidth).
Frame rate is therefore not a very accurate term here, but the problem is still the same – we have an exact number of frames/fields to display per unit of time, and the input video might not match that number. In such cases the most popular solution is to drop and duplicate frames/fields as needed, but the quality of videos produced this way is not great.
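The drop-and-duplicate approach is easy to sketch: for each output timestamp, pick the input frame that would be on screen at that moment. The function below is illustrative, not Panda’s actual converter:

```go
package main

import "fmt"

// convertIndices maps nIn input frames at fpsIn to output frame indices at
// fpsOut by picking, for each output timestamp, the input frame shown at
// that moment -- duplicating frames when fpsOut > fpsIn and dropping them
// when fpsOut < fpsIn. Simple, but it produces the jitter described above.
func convertIndices(nIn int, fpsIn, fpsOut float64) []int {
	nOut := int(float64(nIn) * fpsOut / fpsIn)
	out := make([]int, nOut)
	for i := range out {
		src := int(float64(i) * fpsIn / fpsOut)
		if src >= nIn {
			src = nIn - 1
		}
		out[i] = src
	}
	return out
}

func main() {
	fmt.Println(convertIndices(2, 2, 3)) // prints "[0 0 1]": 2 fps -> 3 fps duplicates frame 0
	fmt.Println(convertIndices(6, 6, 3)) // prints "[0 2 4]": 6 fps -> 3 fps drops every 2nd frame
}
```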
There is a better solution, though it’s complicated enough that we’ll just mention it here: motion compensation. It’s a technique originally used for video compression, but it also gives great results in frame rate conversion. It’s not only useful for SD conversions – we use it for other things at Panda too – but it helps here as well.
Well, that’s definitely not the end of the story. These are the basics, but the number of details that have to be considered is unfortunately far greater. Anyway, if you ever happen to have to support SD television, we’re here to help! Supporting SD can be as easy as creating a profile in Panda:
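For illustration only – the field names below are hypothetical, not Panda’s actual profile schema – a PAL SD profile might capture parameters like these (720×576 at 25 fps, letterboxed, MPEG-2, interlaced):

```json
{
  "name": "pal-sd",
  "width": 720,
  "height": 576,
  "fps": 25,
  "aspect_mode": "letterbox",
  "video_codec": "mpeg2video",
  "interlaced": true
}
```

An NTSC variant would swap in 720×480 at 29.97 fps.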
For the last decade, content marketing has been dominated by video streaming. Whether you’re a comedian posting funny videos to build a following or a business creating an informative product demo to help your viewers, choosing the right type of video message is crucial to boosting views and rankings. Here is a closer look at some of the most popular forms of video marketing for various content types.
Streaming media is the foundation on which the social Internet runs. Over the last two years, Twitter and Instagram have piggybacked on the social video marketing success of YouTube and Facebook. Twitter released Vine, which allows the user to post and share six-second videos, while Instagram added video-streaming capabilities to their regular feeds.
The benefit of choosing social video is that it has the ability to reach many people in a short amount of time. If your video is only 30 seconds to a minute long and designed to capture your viewer’s attention within the first five seconds, there’s a better chance of getting more views, likes, and shares.
This type of content marketing is great for short messages, entertainment (i.e., funny videos), and sales messages.
There is a misconception that people won’t take time out of their daily routines to watch a video that is more than three minutes long. YouTube was built around this belief and, up until a few years ago, was dominated by it. The Coca-Cola Company has paved the way for those who need to deliver a message that cannot be adequately expressed in under five minutes but still want to reach their streaming media audiences.
The seven-minute animated video for their new car line was a mixture of important information, humor, and entertaining visuals. It was quickly embraced by viewers and sent across the social media world. Other companies have also taken this route. Some videos have breached the 10-minute mark, reaching up to almost half an hour in length.
This type of video marketing can be tough, but for those with an important message to deliver and a penchant for creativity, the online film may be the answer to avoiding traditional marketing routes. Online films are best suited for extended product sales pitches or for providing a visual checklist of content that appeals to your audience.
Adam Hasler builds digital products. He’s the lead developer at The Big Studio, a design-focused consultancy based in Boston, where he handles a lot of the design as well as all of the coding. Like other digital product leaders, Adam is first and foremost a developer and a designer – work like video encoding usually falls outside the scope of a typical day.
Adam’s been working on an app that’s used by psychologists as an assessment mechanism. It’s the project of a psychologist, who’d been applying this assessment framework on paper, and administering it to people that way.
Adam’s task was to build a video quiz where subjects could click on a video and give their feedback. The video quiz component would then record where they clicked and let them explain why that moment resonated with them. Each subject’s feedback would then be compared to that of experts to assess whether they could read a situation as well as the experts could.
“I needed to build a tool where subjects watching a video could say, ‘There, right there, that thing that happened is what I think is important,’” explains Adam Hasler. “In my first test build, I used a solution that involved uploading a video and running it through a script. It didn’t work. It was a disaster.”
To complete the project, Adam needed to build both a testing and an authoring component. Psychologists needed to be able to write the tests, so there were two user personas in that sense: a tester and a test taker. The tester would always be a psychologist, who wasn’t technologically savvy, so Adam had to make a really good test editing interface. Because of the nature of the project, he needed to:
“I discovered Panda through Heroku, and it ended up being the best solution,” says Adam. “With Panda, psychologists can author an assessment video by dragging and dropping it into a container I built. Panda uploads it, and collects the feedback. We don’t have to worry about uploading 4 different video file types, because Panda encodes the videos to work on different browsers.”
Thanks to Panda, the project has been highly successful. One of the key users is the Department of Defense, which is testing subjects for their responses to conflict.
“Because of the interactivity, I needed more than a video on a page,” explains Adam. “With Panda, I get that beautiful little JSON object back with all the information I would need to make all the difference for this very little, key component. I love Panda! It made my life so much easier. I think it’s so cool.”
On the surface, Panda is a pretty simple piece of software – upload a video, encode it into various formats, add a watermark or change frame rate, and deliver it to a data store.
Once you spend some time with it, it begins to show how complex each component can be – and how important it is to continuously improve each one.
When Panda was first built, it worked beautifully, and it was quick! But as time went on, and the volume of videos encoded per day increased, it became obvious that, to keep pace with customers’ increasing speed requirements and maintain growth, core parts of the platform would need to be rethought.
We started looking at each component piece by piece to find bottlenecks, optimize throughput and keep operating expenses fair so we could retain our price leadership. Panda might be a software platform, but reading ‘The Goal’ by Eli Goldratt, a book about a manufacturing plant, really reminded us of the process. (It’s a great read btw).
In July we updated to the most current versions of Ruby and Go – and added a memory cache to tasks that were maxing out our instances. Then we tackled the big scale bottleneck – the job manager.
The Job Manager is built to ensure that our customers’ video queues get processed as close to real-time as possible, and it distributes transcoding jobs to the encoder clusters. Whether it’s 2000 encoders with 8 CPU cores each, or 1 encoder on 1 CPU core, it’s important that work is allocated correctly.
It monitors all encoding servers running within an environment, receives new jobs, and assigns them to instance pools.
The Panda Job Manager was a single-threaded Ruby process, which worked well for quite some time. Then we noticed it would start struggling during peaks, and we had to do something about it. We started looking at where we could optimize, identifying bottlenecks one by one.
It was obvious that events processing was too slow in general, but before we even fired up a profiler, we managed to find a huge one just by looking at logs and comparing timestamps.
Short digression: we use Redis queues for internal communication, and there was one such queue where all messages for the manager were sent. The manager was constantly polling this queue, and most of its work was based on the messages it received. Each encoding server had a queue in Redis too, and all these queues were used for communication between the manager and the encoders.
Because a single Redis queue was used for new jobs as well as manager/encoders communication, huge numbers of the former were causing delays in the latter. And a slow down in internal communication meant that some servers were waiting unnecessarily long for jobs to be assigned.
The obvious solution was to split the communication into two separate queues: one for new jobs and another one for internal messaging. Unfortunately, Redis doesn’t allow blocking reads from more than one queue on a single connection.
We were forced either to implement a Redis client that would use non-blocking IO to handle more than one connection in a single thread, or to resort to multiple threads or processes. Writing our own client seemed like a lot of work, and Ruby isn’t especially friendly if you’d like to write multithreaded code (well, unless you use Rubinius).
Before trying to solve that, we launched the manager under a profiler to get a clearer picture. It turned out that roughly 30% of the time was spent querying the database (jobs were saved, updated and deleted from the DB), and the remaining 70% was just running the Ruby code. Because we were a few orders of magnitude slower than we wished, optimizing just the database or just the Ruby code wouldn’t be enough (and we still had to solve the queues issue). We needed something more thorough than a simple fix.
We started by rewriting the manager in Go. We didn’t want to waste time on premature optimization, so it was roughly a 1:1 rewrite; just a few things were coded differently to be more Go-idiomatic – but the mechanics stayed the same.
The result? The 70% previously spent running Ruby code dropped to about 1%! That was great – total processing time fell by almost 70% – but we were still nowhere near where we wanted to be.
Then we fixed the queues issue. With Go’s multithreading model it was so simple that it’s almost not worth mentioning – we even got free message pre-fetching via a buffered channel (another goroutine polls Redis and pushes messages into the channel). And this was a huge kick – now we could handle more than 16,000,000 jobs per day per job manager.
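As a rough sketch of that pattern (with the Redis client stubbed out so it runs standalone): a goroutine per queue pushes messages into a buffered channel, which is where the free pre-fetching comes from, and the consumer reads both queues with a single select:

```go
package main

import "fmt"

// prefetch starts a goroutine that repeatedly calls produce and buffers up
// to buf results in a channel, so the consumer rarely waits on the source.
// In the real manager, produce would be a blocking Redis pop; here it's a
// stub so the sketch runs on its own. (A production version would also
// take a stop signal instead of looping forever.)
func prefetch(produce func() string, buf int) <-chan string {
	ch := make(chan string, buf)
	go func() {
		for {
			ch <- produce()
		}
	}()
	return ch
}

func main() {
	n := 0
	jobs := prefetch(func() string { n++; return fmt.Sprintf("job-%d", n) }, 16)
	m := 0
	control := prefetch(func() string { m++; return fmt.Sprintf("ctl-%d", m) }, 16)

	// The consumer blocks on both queues at once with select -- the
	// multi-queue read that was awkward to do over a single Redis
	// connection in the Ruby version.
	for i := 0; i < 4; i++ {
		select {
		case j := <-jobs:
			fmt.Println("processing", j)
		case c := <-control:
			fmt.Println("handling", c)
		}
	}
}
```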
We could have pushed it harder, but we still hadn’t even started profiling our new Go code at this point. Golang has great tools for profiling, so we went through the bottlenecks rather quickly (it was the database almost every time). When we decided that was enough, we started testing… and we just couldn’t get enough EC2 instances to reach the manager’s limit. We stopped at a bit under 80,000,000 jobs per day, without a sign of sweat on the manager.
The end result of this phase is a technical architecture that clears queues much faster, and for the same encoder price, delivers better throughput and greatly enhanced encoder bursting (especially good during the holiday season where we often have customers that ratchet up activity by 100x!), and more automation. We’re not done yet – and we have some fantastic features coming in 2015 that the new back-end enables us to deliver.
Do you have a suggestion or some knowledge you’d like to share with us? We’d love to hear from you – get in touch at email@example.com anytime (we’re 24×7).
Thanks to everyone who came out to Exosphere 2014! It was a great turnout, and good times were had by all.
On Wednesday, November 12th, at approximately 18:30 Pacific Standard Time (PST), all those invited received a transmission via Short Message Service (SMS) with instructions as to where our launch into the Exosphere was going to be.
After boarding the lift and arriving at the rooftop launch pad, cosmonauts were greeted by balloons, a DJ, plenty of sushi, a poker room, and drinks all around.
Thanks for helping us throw a great afterparty. Some notable guests came from Rackspace, StatusPage, Box, Carbonite, Docker, Amazon, New Relic, Adobe, HTC, Pivotal, Cloudability, Zynga, Ooyala and more!
Status reports show that the mission was a success, and all reported an enjoyable time. There must have been some turbulence though, as some were late to scheduled check-ins the following day.
The countdown has begun for next year’s ship. Be sure not to miss it!
We recently spoke to Bruno Freitas, the Lead Front-End Engineer at CloudWalk. CloudWalk is an open payment platform based in Sunnyvale, US, with offices in Brazil as well. Its point-of-sale (POS) payment solution is already processing credit and debit card transactions on POS terminals in retail outlets.
Knowing that they wanted to create great customer experiences even during unexpected downtime, CloudWalk went in search of a tool that would allow them to share news and embrace transparency with their customers. They were looking for:
Most especially, they wanted tools that could turn a potentially negative situation into an excellent customer experience.
“That’s when we found Pingdom to help us keep track of and monitor our services,” explains Bruno Freitas. “I knew I could just write an application that would use Pingdom’s API data, and create an interface on top of that. And that was the plan, actually. But then I saw something mentioning that we could use a service that would consume Pingdom’s data, and that service was StatusHub. I gave it a try and it was seamless. Everything was easy. We decided to stick with StatusHub, because everything just worked.”
“Other solutions couldn’t offer the seamless integration that StatusHub does with Pingdom,” reports Bruno Freitas. “That was one of the main features that made us decide to commit to StatusHub.”
Besides connecting to other tools that CloudWalk already uses like Pingdom, StatusHub also allows CloudWalk to show off their uptime history and performance metrics to their customers. “In addition to the integration, StatusHub’s service is really solid,” Bruno Freitas adds. “I’ve never had a problem with anything. StatusHub’s support docs are great.”
StatusHub incidents are mostly triggered by webhooks or the API integration. But they can also be updated manually, which some customers choose to do.
When you log in to your StatusHub account, you can click on the clock icon on the far right to see history. Then, click “Incidents history” to see the list of all incidents recorded on your status site.
We’ve added additional visibility here, so you can review all incidents on a single page rather than browsing back through time on your public page.
You can see an incident and its full history by clicking “Show incident history”.
Industry: Telecommunications / ISP
Goals: Pro-active customer service, real-time frequent updates
Why StatusHub: Attractive user interface, good pricing
Founded in 2002, Australia-based Ace Communications is a telecommunications and ISP services company. Ace provides Internet and phone connections, mobile services, hosted VOIP, and other services, like traditional web hosting.
Ace Communications has 9 different products that needed a status update solution. Ace had been using an internal wiki, and then notifying their customers of status changes on Twitter.
Although highly skilled, Ace’s development team did not want to develop a status solution for themselves. Building their own status page was problematic, because:
From initial deployment to ongoing maintenance and development, the team at Ace knew that it would be more cost-effective to find a best-in-class status update solution.
“Finding a status solution meant massive cost savings,” says Dale Munckton, Ace’s founder.
Dale lives and breathes the tech side of things, and is always engaged on the sales side as well, so having an industry-leading status update tool was important to him.
The Ace team looked at all the options for status update tools. StatusHub was their #1 choice. Here’s why:
For Ace, StatusHub’s most appealing feature was the user experience it could provide to their customers. They found StatusHub’s layout and presentation quite clean compared to other status update tools, which seemed overly complicated.
StatusHub allows Ace to pro-actively let people know about downtime. This is a big time saver. They also liked that their customers could subscribe and choose what notification method they wanted.
Price-wise for the features, StatusHub sits very well in comparison to its competition.
Ace’s new status page is online at status.acecommunications.com.au. Featured on Ace’s home page, the company encourages its customers to subscribe to automatic updates via SMS and email. They also explain that the status page is mobile friendly, so if Ace’s customers are having any issues, they can check for known outages on mobile.
With tools like SMS updates, StatusHub has reduced Ace’s status call follow-ups by 50% to 75%.
“When Ace has downtime, we send our customers an email through StatusHub to update them every few hours, even if it’s just to say that engineers are still working on it,” Dale Munckton explains. “If our customers don’t hear from us after a bit of time, they often wonder what’s going on, and then the calls start coming in. If we’re constantly updating StatusHub, it helps with our customer service.”
As a best practice, when there’s an outage and customers call in, Ace’s support team coaches them on how to subscribe to StatusHub notifications. They even walk their customers through the process on the phone, or subscribe them to updates through the StatusHub tool.
“The biggest time saver for Ace is taking inbound call traffic away from our support team,” says Dale Munckton. “Subscribing our customers to StatusHub means that we don’t need to give people an update over the phone.”