This post collects links to the entire OTT & Multiscreen Digital Video series. Clicking on a thumbnail will open the PDF article (for subsequent download). Clicking on the link below the thumbnail will redirect you to the original web article.
I. Consumption is Personal
In the days of linear television, broadcasters had a relatively difficult task in understanding their audience. Without a real-time feedback mechanism like the Internet, adjusting to subscriber behavior was slow. Today, online video providers have the ability to conduct a one-to-one conversation with their audience. Viewing habits of consumers will continue to change rapidly over the next ten years, and advertising expenditure will have to change with them. Given the global nature of Internet video, online services will need to optimize accordingly to capitalize on these market opportunities.
The evolution from traditional TV viewing to online video has been swift. It has significantly disrupted disc sales, such as DVD and Blu-Ray, as well as cable and satellite TV subscriptions. With the newfound ability to consume content anytime, anywhere, and on any device, consumers are re-evaluating their spending patterns. This paper discusses these changes in buying behavior and identifies the turning point at which they began to accelerate.
Transcoding large video libraries is a time-consuming and expensive process. Maintaining consistency in video quality helps to ensure that storage and bandwidth are used efficiently. It is also important for video administrators to understand the types of devices receiving the video, so that subscribers enjoy an optimal viewing experience. This paper discusses the differences in quality among popular video codecs, including the recently ratified H.265 specification.
IV. Search & Discovery Is a Journey, not a Destination
Television subscribers have come a long way from the days of channel hopping. The arduous days of struggling to find something worthwhile to watch are now behind us. As consumers look to the future, the ability to search for related interests and discover new ones is established common practice. This paper discusses the challenges that search and discovery engines face in refining their services to serve a truly global audience.
V. Multiscreen Solutions for the Digital Generation
Broadcasting, as a whole, is becoming less about big, powerful hardware and more about software and services. As these players move to online video services, subscribers will benefit from the breadth of content on offer. As the world’s video content moves online, solution providers will contribute to the success of Internet video deployments, with support for future technologies such as 4K video, advancements in behavioral analytics, and the accompanying processing and networking demands. Migration to a multiscreen world requires thought leadership and forward-thinking partnerships to help clients keep pace with the rapid march of technology. This paper explores the challenges that solution providers face in helping curators of content address their subscribers’ needs and changing market demands.
VI. Building a Case for 4K, Ultra High Definition Video
Ultra High Definition (UHD) technology, or 4K, is the latest focus in the ecosystem of video consumption. Most consumers consider this technology far from reach, if necessary at all. In fact, 4K is right around the corner, and will creep onto consumer wish lists by the end of this decade. From movies filmed in 4K to archive titles scanned in UHD, there is a library of content just waiting to be released. Furthermore, today’s infrastructure is converging to meet the demands of 4K, including Internet bandwidth speeds, processing power, connectivity standards, and screen resolutions. This paper explores the next generation in video consumption and how 4K will stimulate the entertainment industry.
Social TV brings viewers to content via effective brand management and social networking. Users recommend content as they consume it, consumers actively follow what others are watching, and trends drive viewers to subject matter of related interest. The integration of Facebook, Twitter, Tumblr, and other social networks has become a natural part of program creation and the engagement of the viewing community. Social networks create an environment where broadcasters can work with niche groups without geographic limits. The only limitations are those dictated by content owners and their associated content rights, and a corporate culture that prevents broadcasters from evolving into a New Media world.
VIII.-X. Turning Piratez into Consumers, I, II, III, IV, & V
Content protection is a risk-to-cost balance. At the moment, the cost of piracy is low, the risk is low, and enforcement is not ubiquitous. There is no silver bullet for piracy, but steps can be taken to reduce it to more acceptable levels. It is untrue that everyone who pirates would refuse to buy the product legally. It is equally untrue that every pirated copy represents a lost sale at full download price. If the risk is high enough and the cost is low enough, fewer people will pirate content. This paper explores how piracy has evolved over the past few decades, discusses the issues around copyright infringement in the entertainment industry, and proposes steps to convert Piratez into consumers.
The future of digital video is expanding in all directions: the size of the living room TV, the depth of content selection, and the variety of devices which serve content. A convergence of technologies is brewing that will bring an IMAX-esque[1] experience to the living room. It is not difficult to imagine that within the next ten years subscribers will be unrolling their TVs and gluing them onto the wall. A combination of the following innovations will make this happen:
Televisions are growing to the size of an entire wall. Several 100” television sets (2.5 meter diagonal) have been introduced to the market over the years, and prototypes of even larger screens have also been showcased. As screen sizes continue to increase, the only limiting factor will be the available wall space.
Displays are verging on the thinness of credit cards, thanks to technologies such as OLED[2] (Organic Light-Emitting Diode). In 2013, LG[4] introduced OLED displays as thin as 4mm[3]. Although OLED had a slow start due to high manufacturing costs and other technical issues, it offers a promising future for ultra-thin, ultra-high resolution displays, largely because each pixel is self-emissive (i.e. it emits light without requiring a back-lit layer). As screens become thinner, this leads to the inevitable availability of…
Flexible displays[5]. These have been announced by manufacturers such as Sony[6] and Samsung[7], as well as display technology manufacturer Corning[8].
Higher resolutions such as 4K[9] (aka UHD, Ultra High Definition video) are now being introduced to the market. When display technology verges on the size of walls, even 4K will not satisfy consumers, and 8K or higher will begin to steal their attention.
Computing power to crunch through all that Ultra High resolution data is readily available.
The ability to deliver hundreds of megabits per second of bandwidth[10] to the average consumer is on the horizon.
Figure i – Top 16 cities for High-speed connections @35 US$ per month
These advances in home video may seem like a distant dream, but the future is closer than most realize. 4K television was a hot topic at the Consumer Electronics Show, CES ’13[11], in Las Vegas this year. But some consumers may feel that 4K has been introduced too soon – especially considering that Blu-Ray only recently reached strong sales momentum, and HDTV[12] has finally established a firm foothold in the living room, also penetrating the mobile market. So why 4K video, and why now?
When viewed from the perspective of technology penetration, this is the perfect time to introduce Ultra HD. Higher resolution displays immediately benefit consumers wanting the most real estate on their devices. More windows, icons, and widgets can be displayed side-by-side in all their high-resolution glory. Businesses and enthusiasts have already been using display resolutions higher than HD for several years. For example, the popularity of 2560×1600[14] (2.5K displays, perhaps?) steadily increased as prices dipped below $1,000. More recently, Apple released its Retina[15] displays in the latest generation of iPads (2048×1536 at 264 pixels per inch, ppi) and MacBook Pro laptops, at even higher resolutions (2880×1800 @ 220 ppi). Consumers are quickly becoming acclimated to high pixel densities. Retina displays enhance the subscriber’s viewing experience on smartphones, tablets, and laptops, and create a precursor for ultra-high resolution content.
So how will this Ultra HD content reach the subscriber in the first place? Some cities already offer bandwidth that can accommodate a 4K live video stream[16]. According to the New America Foundation[17], at least 12 cities currently offer affordable download speeds above 30Mbps (Figure i). This is well within the bandwidth requirements of a single 4K live stream[18] (assuming typical streaming quality, and that the internet pipe isn’t being used for anything else). Moreover, this is offered at a very reasonable fee of 35 US$ per month[19]. At a national level, Asia-Pacific countries rank in the top three, European countries hold six of the top ten positions, and the United States holds steady in ninth place (Table I). Even though the national average of some countries can barely accommodate a real-time high definition stream (typically between 4Mbps and 8Mbps for online HD streaming), peak download speeds exceeding 30Mbps are enjoyed in a select number of cities around the world.
Table I – Top Countries for Average & Peak Internet Speeds
In any case, sending 4K over today’s internet connections will not be optimal using today’s encoding standards. Streaming encoders will need to utilize the newly finalized H.265[i] format. Current tests show a 15%-20% improvement over the currently ubiquitous H.264 codec, but as implementations of the codec are optimized, a 50% improvement in compression efficiency is anticipated. This means that a 4K movie with a frame aspect ratio of 2.39:1 (aka CinemaScope, typical for Hollywood movies) could be delivered quite comfortably within an existing 30Mbps internet connection. Alternatively (as shown in Table II[ii]), in the case of a Video on Demand (VoD[iii]) service, a 4K movie could be downloaded within 1.5 hours over a 30Mbps connection. HD content using the same service would download in just under 20 minutes, and standard definition (SD) content would complete in little over six minutes[iv].
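The download times quoted above are straightforward arithmetic on file size and link speed. A minimal sketch in Python; the per-resolution file sizes below are illustrative assumptions, not figures taken from Table II:

```python
def download_minutes(size_gb: float, link_mbps: float) -> float:
    """Minutes to download size_gb gigabytes over a link_mbps connection.

    Uses decimal units (1 GB = 8,000 megabits), as ISPs typically do,
    and assumes the link is fully dedicated to the download.
    """
    return size_gb * 8_000 / link_mbps / 60

# Illustrative H.265 file sizes (assumed): 4K ~20 GB, HD ~4.5 GB, SD ~1.35 GB
for label, size_gb in [("4K", 20.0), ("HD", 4.5), ("SD", 1.35)]:
    print(f"{label}: {download_minutes(size_gb, 30):.0f} min over 30 Mbps")
```

With those assumed sizes, a 30Mbps pipe yields roughly 89 minutes for 4K, 20 minutes for HD, and 6 minutes for SD, in line with the figures above.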
Figure ii – 4K digital video to the Consumer – Minimizing the Bottleneck
This begs the question: is there a bottleneck today in delivering 4K video to consumers? In fact, it could be argued that for major cities in the top 40 countries in the world, there is no bottleneck[17]. Internet speeds are continually improving, expanding to new cities, and becoming affordable. Further down the pipe, WiFi standards such as 802.11ac[v] promise theoretical bandwidth capabilities from 87Mbps and higher, comfortably carrying several 4K streams (Figure ii). Alternatively, LAN[vi] speeds of 100Mbps have been available in consumer electronics for over a decade. As for the final connection between the set-top-box and the TV, the current iteration of HDMI 1.4 already has the capacity to deliver a 3840×2160p (progressive scan) signal at 24 or 30 frames per second (or 4096×2160p at 24fps). But it is the development of HDMI 2.0, currently in the works, that will extend support to 60fps. This is important because broadcasters will send 4K content to subscribers at their usual 60 frames per second (fps) in the USA, or 50fps in Europe. Furthermore, HDMI 2.0 will support a Transition-Minimized Differential Signal (TMDS[vii]) of 18Gbps, which is ample bandwidth for the final delivery of uncompressed 4K video to the television.
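The claim that 18Gbps is ample can be checked with back-of-envelope math. This sketch assumes 24 bits per pixel and ignores blanking intervals and HDMI line-coding overhead, so it is a lower bound rather than a precise link budget:

```python
def uncompressed_gbps(width: int, height: int, fps: int,
                      bits_per_pixel: int = 24) -> float:
    """Raw video bandwidth in gigabits per second (no blanking or coding overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

# 4K at 60fps comes to roughly 11.9 Gbps, inside HDMI 2.0's 18 Gbps TMDS budget
print(round(uncompressed_gbps(3840, 2160, 60), 1))
```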
Table II – File size, & download times by Video type
To be fair, the main bottleneck in delivering 4K video to consumers is likely the processing power of the devices responsible for encoding and decoding video. H.265 is expected to take as much as ten times longer to encode than H.264, and 4K has four times the real estate of HD. Curators of video transcoding should therefore anticipate at least a 40x increase in encoding time when moving from HD@H.264 to 4K@H.265. Thankfully, decoding H.265 is only two to three times more costly than H.264. Even so, adding 4K to the frame will require consumer processors to be at least ten times more powerful than they are today. Whether it be a set-top box, gaming console, or media center appliance, these CPUs will need to be: a) powerful enough to decode 4K in combination with H.265; and b) affordable for the price-sensitive consumer electronics (CE) market.
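The 40x and 10x figures follow from multiplying the codec penalty by the pixel-count ratio. A sketch of that arithmetic; the 10x encode and 2.5x decode factors are the estimates quoted above, not benchmarks:

```python
# 4K (3840x2160) carries 4x the pixels of 1080p HD (1920x1080)
pixel_ratio = (3840 * 2160) / (1920 * 1080)

encode_penalty = 10.0   # H.265 vs H.264 encode cost, worst case cited above
decode_penalty = 2.5    # H.265 vs H.264 decode cost, midpoint of the 2-3x range

print(pixel_ratio * encode_penalty)  # encode-time multiplier, HD@H.264 -> 4K@H.265
print(pixel_ratio * decode_penalty)  # required decode horsepower multiplier
```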
Figure iii – Flexible Displays by PowerFilm
An IMAX-esque Experience
While a full back-catalog of digitally restored Blu-Ray content is being released on a weekly basis, there are looming questions regarding the absence of available 4K content. Certainly, 4K TV can only be successful if content is available to take advantage of its glorious resolution. But this leads to the inevitable chicken-and-egg predicament: which should come first, a) the infrastructure supply chain all the way down to the display, or b) the content? It certainly makes sense that display technology should precede the release of complementary content, and this has ultimately been the industry approach to introducing 4K.
To fuel the anticipated transition to UHD, a wave of film restoration over the past decade has resulted in the scanning and digitizing of the Hollywood back-catalog. Thanks to digital restoration pioneers such as Lowry Digital[ii], now owned by Reliance Big Entertainment, high-ticket items such as the Disney[iii] and James Bond[iv] collections were some of the first titles to be digitally restored. At the moment, as many as eighty Blu-Ray titles are released each week[v] – some of them digitally restored back-catalog titles, others recent theatrical releases filmed using 4K digital cameras. It has become standard practice to scan and digitally restore old film masters at 4K, then transcode or downres[vi] the frames for distribution on DVD (standard definition, SD) and Blu-Ray (high definition, HD). For the time being, consumers are neither aware of nor have access to the existing 4K versions of these titles. But when the time comes for studios to release their catalogues in 4K, preparing them for public distribution will be a relatively easy task.
It’s worth pointing out that the presentation of these 4K digital restorations is inevitably better than when the films originally premiered in movie theaters decades earlier – a time of sub-standard lens optics (from today’s vantage point), when scratches and pops on analog film reels were considered the norm.
Figure iv – RED One 4K (left) & Epic 5K (right) Digital Cinema Cameras
Restoration aside, movie production using native 4K digital cameras was introduced long ago by RED Digital Cinema[vii] – first with their RED One[viii] in 2007, and then with the RED Epic[ix] in 2010, supporting 5K (5120×2700) resolution. Founding member and first employee of RED, Ted Schilowitz, commented at NAB ’13 in Las Vegas, “Since we introduced RED back in NAB ’07, thousands of movies have been filmed using our cameras. And it’s not just Hollywood that’s into 4K and 5K production – international studios, and enterprises have joined in as well.”
Figure v – RED Dragon 6K sensors scheduled for upgrade, at NAB ’13 in Las Vegas
RED continues to lead the market with the introduction of the RED Dragon, announced on the 8th of April, 2013. This new sensor extends their Mysterium® range to 6K – supporting 6144×3160, equivalent to the resolution of a 19 megapixel camera. The Dragon far exceeds the pixel density of any competing 4K camera[xi] that has recently entered the UHD production space. Sony is also fighting for market share with the introduction of its F65[xii], claiming an 8K sensor, although the true pixel count is measured at around 5782×3060[xiii] (over 17 million pixels – thus closer to 6K resolution).
Crossing the 4K Chasm
The first 4K experience for most consumers has likely already been – or soon will be – at the cinema. Although most theaters are outfitted with digital projectors using 2K (2048×1080)[xiv], they are steadily upgrading to 4K. As the ultra-high definition experience becomes ubiquitous in theaters, movies produced in UHD will be projected[xv] natively, without any reduction or compromise in pixel resolution.
The distribution of 4K content to theaters is an ongoing challenge. With the shadow of piracy looming, content needs to be delivered such that the following contingencies are addressed:
Content Delivery – As the Internet becomes the vehicle to distribute movies to selected cinemas, the appropriate DRM (Digital Rights Management) and encryption mechanisms need to protect the content.
Content Storage – Distributed 4K films require tamper-proof hardware to ensure that content is securely protected while at rest.
Content Rights – Centrally established usage policies are needed so that movies are projected at authorized times, and aptly expire – as authorized by the content distributors.
Content Quality – Maintaining consistency in quality through the use of industry certified projectors, screens, optics, and audio quality is essential to ensuring uniformity in the 4K cinema experience. Using the recently ratified H.265 standard will maintain efficiency in file size and bandwidth, while maximizing video quality.
Figure vi – Penetration of Selected Audio & Video Technologies in U.S. Households since 1981
Finally, it’s worth mentioning the anticipated rate of market adoption of 4K when compared to previous technologies. Studies show that the rate of adoption increases with every new technology. The CD[xvii] took 16 years to reach 70% penetration in U.S. households[xviii]. It then took six years for the DVD[xix] (introduced in 1998)[xx] to reach the same adoption. HD television has grown at a similar pace, fueling Blu-Ray sales in the process. By chance or design, as each technology reached 70% penetration, a new format was introduced to consumers (Figure vi). Interestingly, none of the past four recessions adversely affected adoption of these technologies[xxi]. With the introduction of 4K televisions in 2013, it is entirely feasible for 4K to reach similar adoption rates by the end of this decade.
Before consumers benefit from 4K, its successor is already being showcased. 8K[i] supports resolutions up to 7680×4320 and has already been demonstrated by NHK[ii], Japan’s public broadcasting organization[iii]. 8K is equivalent to viewing every single frame of the video at the resolution of a 33 megapixel camera.
When 4K eventually arrives to the living room, consumers will turn their attention to 8K and beyond. HD will be relegated to history class, and on display at the local technology museum.
About the Author
Gabriel Dusil was most recently the Chief Marketing & Corporate Strategy Officer at Visual Unity, with a mandate to advance the company’s portfolio into next generation solutions and expand the company’s global presence. Before joining Visual Unity, Gabriel was the VP of Sales & Marketing at Cognitive Security, and Director of Alliances at SecureWorks, responsible for partners in Europe, Middle East, and Africa (EMEA). Previously, Gabriel worked at VeriSign & Motorola in a combination of senior marketing & sales roles. Gabriel obtained a degree in Engineering Physics from McMaster University in Canada, and has advanced knowledge of Online Video Solutions, Cloud Computing, Security as a Service (SaaS), Identity & Access Management (IAM), and Managed Security Services (MSS).
The era of multiscreen video has begun. Portability and connectivity are changing the video landscape. TV Everywhere and other multiscreen initiatives are fundamentally changing the entertainment business model, with apps streaming live to TVs, computers, tablets, and mobile phones. According to the latest forecasts from Informa, the global online-video market will be worth $37 billion in 2017, driven by the popularity of OTT (Over-the-Top) services. Broadcasters, content owners, and distributors must embrace multiscreen delivery to survive. This presentation explores these market trends, and integrated solutions that bridge the gap between the broadcast world and multiscreen consumption.
Online video providers today have the ability to conduct a one-to-one conversation with their audience, compared to the somewhat anonymous nature of this relationship in traditional TV. Viewing habits of consumers will continue to change rapidly over the next ten years, bringing more choice, portability, and accessibility to video. Granular analysis of subscriber behavior will open new opportunities for content owners, end users, and everyone in between. This will require accompanying changes in advertising expenditure as it pertains to a global vs. local focus. Given the global reach of video, due to the ubiquity of the Internet, online services will need to optimize to capitalize on new market opportunities.
Creating a compelling and engaging video experience has been an ongoing mission for content owners and distributors – whether it was the introduction of CinemaScope[1] in 1953 to stifle the onslaught of color TV[2], the introduction of 3D films[3] in the 50’s and 80’s and their subsequent re-introduction in 2009 with the launch of Avatar[4], or 4K Ultra High Definition (UHD[5]) TV and Retina[6]-quality video. In every case, gauging video quality has been a subjective exercise for consumers and experts alike.
Figure i – Visual Representation of calculating Qf
Beyond the signal-to-noise ratio (SNR[7]) measurement used to compare different compression ratios or codecs, in many cases only a trained eye would notice errors such as compression artifacts[8], screen tearing[9], or telecine judder[10] – unless they were persistent.
A modest metric for assessing a video file’s compression density is the Quality factor (Qf[11]). The name is misleading, since it is not actually a measure of quality but an indication of video compression, using three parameters: bitrate, the number of pixels in the frame, and the overall frame rate of the video. Qf is essentially a measure of “the amount of data allocated to each pixel in the video”[12]. This metric doesn’t take into account the compression profile used, the number of passes utilized in the encoding process[13], or any tweaks implemented by the encoding engineer to optimize video quality. So Qf, or compression density, is just a baseline guide for an administrator responsible for transcoding or managing large video libraries.
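Qf as described can be computed directly from those three parameters. A minimal sketch; the sample bitrate below is an assumed MPEG2-era figure chosen to illustrate a typical 1080p case, not a value from the article's table:

```python
def quality_factor(bitrate_bps: float, width: int, height: int, fps: float) -> float:
    """Bits allocated to each pixel of each frame: bitrate / (pixels x frame rate)."""
    return bitrate_bps / (width * height * fps)

# A 1080p film at 24fps encoded at ~16.4 Mbps (assumed MPEG2 rate)
print(round(quality_factor(16_400_000, 1920, 1080, 24), 2))  # ~0.33
```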
The accompanying table shows a comparison of Qf using nominal figures for DVD, Blu-Ray, and the recently ratified H.265 codec (aka High Efficiency Video Coding, HEVC[14]). As the compression standard used for encoding improves, the Qf required for comparable quality is reduced.
Although Qf may be an inaccurate measure of video compression quality, it becomes valuable during the encoding[15] or transcoding[16] stage – especially when multiple videos require processing and an administrator can choose a consistent profile and all related sub-parameters. Choosing a single Qf in this case ensures uniformity of compression density across the entire library. There are several internet forum discussions on the optimum quality that should be used for encoding (or an ideal Qf). Realistically, every video has its own unique optimum settings, and finding this balance for each individual video would be impractical. For this reason, grouping video libraries by genre or content type, then using one Qf for each group, is a more reasonable compromise. For instance, corporate presentations, newscasts, medical procedures – basically any type of recording with a lot of static images – could be compressed with the same Qf. The corresponding file for these videos could be as small as 1/20th the size of a typical Blu-Ray movie, with no perceivable loss in video quality.
Table I – Comparing Qf for MPEG2, H.264 & H.265[17]
As shown in the table, the Qf metric is useful in showing that a 1080p movie using the MPEG2 codec (aka H.262 under the ITU definition) at 16.7GB (Gigabytes[18]) of storage (Qf = 0.33) compares equally to 10GB using H.264 (Qf = 0.20). In the case of H.265, a file size of 6GB (Qf = 0.12) again maintains the same quality. This is because each of these codecs significantly improves efficiency over its predecessor, while maintaining the same level of perceived video quality.
Figure ii – Visual representation of Video Compression standards & relative bandwidth requirements[19]
Ascertaining a video’s compression density can be achieved using MediaInfo[20], an open-source software package. This utility is an excellent resource in determining the formatting and structure of a given video file. MediaInfo displays a plethora of metadata and related details of the media content in a well laid-out overview. This includes granular structure of the audio, video, and subtitles of a movie. The layout of the data can even be customized using HTML and entire directories can be exported as part of a media library workflow. It’s an indispensable resource for content owners and subscribers that are managing large multimedia databases.
Figure iii – Snapshot of MediaInfo showing a video’s Structural Metadata
The H.264 codec (MPEG-4 AVC[21]; Microsoft’s VC1[22] is a comparable rival) improved on the efficiency of the MPEG2[23] codec, developed in 1995, by around 40% to 50%. Although work on H.264 began in 1998, it didn’t reach the mainstream until Blu-Ray officially launched in 2006. The H.265 standard currently promises a similar 35% to 50% improvement in efficiency[24]. So where MPEG2 needs 10Mbps to transmit a video, an H.264 codec could send the same file at the same quality at 6Mbps, and H.265 can achieve the same at 3.6Mbps. The trade-off is that H.265 demands two to ten times the computational power of H.264 for encoding, so expect encoding to take up to ten times longer on today’s processors. Thankfully, devices will only need a two to three times increase in CPU strength to decode the video.
The new H.265 standard ushers in multiple levels of cost savings. At the storage level, cost savings of 40% would be significant for video libraries hosted in any cloud. Content hosting facilities and CDNs (content delivery networks[25]) are costly endeavors for many clients at the moment. It may be argued that storage is a commodity, but when media libraries are measured in Petabytes[26], the capital savings from newer, more efficient codecs help the bottom line. Bandwidth costs offer further savings: many online video platforms charge subscribers for the number of gigabytes leaving their facilities, and halving those volumes by using H.265 would have a significant impact on monthly operational costs. On the flip side, video processing costs will increase in the short term, due to the stronger and more expensive CPU power needed at both the encoding and decoding stages. Existing hardware will likely be used to encode H.265 in the short term, at the expense of time, but dedicated hardware will be needed for any extensive transcoding exercises or real-time transcoding services.
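As a rough illustration of the argument above, the sketch below models a monthly hosting bill under assumed per-terabyte prices, applying the ~40% storage reduction and halved egress volumes discussed in the text. Every price, library size, and traffic figure here is hypothetical.

```python
# Back-of-envelope savings from re-encoding an H.264 library in H.265.
# All prices and volumes are illustrative assumptions, not vendor quotes.

def monthly_cost(library_tb: float, egress_tb: float,
                 storage_per_tb: float = 25.0,   # $/TB-month (assumed)
                 egress_per_tb: float = 80.0) -> float:  # $/TB (assumed)
    """Simple monthly hosting bill: storage plus outbound traffic."""
    return library_tb * storage_per_tb + egress_tb * egress_per_tb

h264 = monthly_cost(library_tb=2000, egress_tb=500)
h265 = monthly_cost(library_tb=2000 * 0.60,  # ~40% smaller files
                    egress_tb=500 * 0.50)    # roughly half the egress
print(f"H.264: ${h264:,.0f}/mo  H.265: ${h265:,.0f}/mo  "
      f"saving: {1 - h265 / h264:.0%}")
```

Because both storage and egress scale with file size, the savings compound directly into the monthly bill, which is the crux of the operational argument for the newer codec.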
Subscription-based internet services significantly compress their video content compared to their Blu-Ray counterparts. It’s a practical trade-off between video quality and bandwidth savings. But video quality only becomes a factor on consumer devices capable of exposing the deficiencies of a highly compressed video. For example, a 60” (inches diagonal) plasma screen has the resolution to reveal a codec’s compression artifacts, but on a TV smaller than 40” these artifacts would be hardly noticeable to the average consumer. For the most part, a 1080p title is barely distinguishable from 720p on even a medium-sized television. Likewise, for many viewers watching on mobile devices, high resolution content is both overkill and costly.
Subscribers with bandwidth caps are charged for all streaming data reaching their smartphones, whether or not they experience the highest quality video. Any video data sent beyond the display capability of the consumer device is money wasted.
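That waste is easy to quantify. The sketch below uses assumed per-rendition bitrates (not measurements) to show how many gigabytes of a data cap an hour of streaming consumes, and how much is wasted when 1080p is sent to a screen where 720p looks identical.

```python
# Data-cap cost of one hour of streaming at different renditions.
# The bitrates are typical assumed values for illustration, not measured.

def gb_per_hour(bitrate_mbps: float) -> float:
    """Convert a stream bitrate (Mbit/s) into GB consumed per hour."""
    return bitrate_mbps * 3600 / 8 / 1000

renditions = {"1080p": 5.0, "720p": 2.5, "480p": 1.0}  # assumed bitrates
for name, mbps in renditions.items():
    print(f"{name}: {gb_per_hour(mbps):.2f} GB/hour")

# On a small screen where 720p is visually indistinguishable from 1080p,
# every hour streamed at 1080p wastes the difference against the cap:
waste = gb_per_hour(5.0) - gb_per_hour(2.5)
print(f"wasted: {waste:.2f} GB/hour")
```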
At the moment, high definition video playback on mobile devices still poses a challenge. Thanks to multi-core processing, smartphones are on the brink of having enough power to play full HD video while running other processor-intensive tasks in the background. Although quad-core[28] processors such as the Cortex A15 from ARM[29] and nVidia’s Tegra 4[30] (also based on the ARM architecture) can play some 1080p video, they will still struggle to play a wide library of full HD content without some level of transcoding to lower profiles. 2013 is ushering in a wide range of handsets claiming 1080p support from HTC, Huawei, Sony, Samsung, and ZTE[31]. Multicore GPUs and CPUs running at ultra-low power are establishing mobile devices as a viable platform for 1080p.
In the meantime, the resilience of H.264 and H.265 lies in their use of encoding profiles (eg. baseline, main, or high, and all associated sub-levels). The use of different profiles ensures that the best quality video experience is delivered within the limitations of the device playing the video. Low profiles such as baseline require minimal processing power but do not compress the video efficiently. High profile modes are highly efficient and squeeze the video file size as small as possible; bandwidth is used efficiently, but the end-device needs more processing power to decode the video. Although the latest Apple iOS[32] devices support high profile, most smartphones still use lower profiles to ensure wider device compatibility. In the interim, internet video providers continue to encode titles into multiple profiles to suit a wide range of subscriber devices, accommodate their limitations in decoding capabilities, and maximize each individual viewing experience.
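A provider's packaging workflow therefore needs some mapping from device capability to the most efficient profile that device can decode. The sketch below is a minimal, hypothetical version of that lookup; real services rely on device databases or client-side capability negotiation, and every table entry here is an assumption.

```python
# Sketch: pick the most bandwidth-efficient H.264 profile a device can
# decode. The capability table is hypothetical, for illustration only.

DEVICE_MAX_PROFILE = {
    "legacy-phone": "baseline",    # minimal decode power, least compression
    "mid-range-phone": "main",
    "smart-tv": "high",            # strongest decoder, smallest streams
}

def best_profile(device: str) -> str:
    """Return the highest profile the device supports.

    Unknown devices fall back to baseline, trading bandwidth for the
    widest possible compatibility."""
    return DEVICE_MAX_PROFILE.get(device, "baseline")

print(best_profile("smart-tv"))     # gets the most efficient stream
print(best_profile("unknown-stb"))  # falls back to baseline
```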
Higher profiles in H.265 will also have an effect on consumer electronics (CE[33]) equipment. Current iterations of these appliances are not equipped to handle the processing demands of H.265. The next generation Home Theater PC (HTPC[34]), Set Top Box (STB[35]), or Media Player[36] will require upgrades to their processing engines to accommodate these next generation codecs. Lab testing is still required to confirm that next generation processors can decode H.265 at higher bit depths (eg. 10 bit) and at resolutions as high as 4K. Some estimates state that 4K using H.265 will require 80 times more horsepower compared to HD using H.264[45].
To further compensate for the vast differences in mobile coverage and best-effort internet communications, Over the Top (OTT)[37] providers and Online Video Providers (OVP)[38] offer advanced video optimization features such as Adaptive Bitrate Streaming (ABS)[39], which adjusts the quality of the video sent in real time. Protocols such as Apple’s HLS[40] and, more recently, MPEG-DASH[41] have been developed to provide a universal approach to implementing adaptive bitrates.
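At its core, the technique is simple: the client measures recent throughput and requests, segment by segment, the highest bitrate rendition that fits. The sketch below assumes a hypothetical bitrate ladder and safety margin; it is not a description of the HLS or MPEG-DASH client algorithms themselves.

```python
# Minimal sketch of the rendition-selection step in adaptive bitrate
# streaming. The ladder and the 0.8 safety margin are assumed values.

LADDER_KBPS = [400, 800, 1500, 3000, 6000]  # illustrative bitrate ladder

def pick_rendition(measured_kbps: float, safety: float = 0.8) -> int:
    """Highest ladder bitrate at or below a safety margin of throughput."""
    budget = measured_kbps * safety
    candidates = [b for b in LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else LADDER_KBPS[0]

# Throughput drops mid-stream; the player steps down, then back up:
for throughput in (5000, 1200, 900, 4000):
    print(throughput, "kbps ->", pick_rendition(throughput), "kbps")
```

The safety margin keeps the chosen rendition below raw measured throughput so that momentary dips do not immediately stall the player's buffer.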
Adaptive Bitrate Streaming and related techniques are just a stop-gap. As quality of service improves and bandwidth speeds increase, the need for optimization techniques will diminish, and in some regions they may disappear completely. During the days of the analog modem, bandwidth was at a premium, so compression techniques and sophisticated error correction methods were used to maximize data throughput while also saving costs on the last-mile[42]. As bandwidth increased, these line adaptation features were no longer deemed necessary. Similarly, the need for bandwidth optimization will be diluted in regions where mobile 4G LTE[43] (Long-Term Evolution) becomes ubiquitous. Speeds will become so reliable that even the internet’s best-effort[44] delivery will be sufficient to carry multiple 4K videos, in real time, to any device.
In the days of linear television, broadcasters had a difficult task in understanding their audience. Without a direct broadcasting and feedback mechanism like the Internet, gauging subscriber behavior was slow. Today, online video providers have the ability to conduct a one-to-one conversation with their audience. Viewing habits of consumers will continue to rapidly change in the next ten years. This will require changes in advertising expenditure and tactics.
The evolution from traditional TV viewing to online video has been swift. This has significantly disrupted disc sales such as DVD and Blu-Ray, as well as cable and satellite TV subscriptions. With the newfound ability to consume content anytime, anywhere, and on any device, consumers are re-evaluating their spending habits. In this paper we will discuss these changes in buying behavior, and identify the turning point of these changes.
Transcoding large video libraries is a time consuming and expensive process. Maintaining consistency in video quality helps to ensure that storage costs and bandwidth are used efficiently. It is also important for video administrators to understand the types of devices receiving the video so that subscribers can enjoy an optimal viewing experience. This paper discusses the differences in quality in popular video codecs, including the recently ratified H.265 specification.
IV. Search & Discovery Is a Journey, not a Destination
Television subscribers have come a long way from the days of channel hopping. The arduous days of struggling to find something entertaining to watch are now behind us. As consumers look to the future, the ability to search for related interests and discover new interests is now established as common practice. This paper discusses the challenges that search and discovery engines face in refining their services in order to serve a truly global audience.
V. Multiscreen Solutions for the Digital Generation
Broadcasting, as a whole, is becoming less about big powerful hardware and more about software and services. As these players move to online video services, subscribers will benefit from the breadth of content they provide. As the world’s video content moves online, solution providers will contribute to the success of Internet video deployments. Support for future technologies such as 4K video, advancements in behavioral analytics, and the accompanying processing and networking demands will follow. Migration to a multiscreen world requires thought leadership and forward-thinking partnerships to help clients keep pace with the rapid march of technology. This paper explores the challenges solution providers will face in helping curators of content address their subscribers’ needs and changing market demands.
VI. Building a Case for 4K, Ultra High Definition Video
Ultra High Definition technology (UHD), or 4K, is the latest focus in the ecosystem of video consumption. Most consumers consider this advanced technology out of reach, if necessary at all. In actual fact, 4K is right around the corner and will be on consumer wish lists by the end of this decade. From movies filmed in 4K to archive titles scanned in UHD, there is a tremendous library of content waiting to be released. Furthermore, today’s infrastructure is evolving and converging to meet the demands of 4K, including Internet bandwidth speeds, processing power, connectivity standards, and screen resolutions. This paper explores the next generation in video consumption and how 4K will stimulate the entertainment industry.
Social TV brings viewers to content via effective brand management and social networking. Users recommend content as they consume it, consumers actively follow what others are watching, and trends drive viewers to subject matter of related interest. The integration of Facebook, Twitter, Tumblr and other social networks has become a natural part of program creation and the engagement of the viewing community. Social networks create an environment where broadcasters have unlimited power to work with niche groups without geographic limits. The only limitations are those dictated by content owners and their associated content rights, as well as those entrenched in corporate culture who are preventing broadcasters from evolving into a New Media world.
Content Protection is a risk-to-cost balance. At the moment, the cost of piracy is low and the risk is low. There are no silver bullets for solving piracy, but steps can be taken to reduce it to more acceptable levels. It is untrue that everyone who pirates would be unwilling to buy the product legally, and it is equally evident that not every pirated copy represents a lost sale. But if the risk is high and the cost is set correctly, fewer people will steal content. This paper explores how piracy has evolved over the past decades, and investigates issues surrounding copyright infringement in the entertainment industry.
About the Author
Gabriel Dusil was recently the Chief Marketing & Corporate Strategy Officer at Visual Unity, with a mandate to advance the company’s portfolio into next generation solutions and expand the company’s global presence. Before joining Visual Unity, Gabriel was the VP of Sales & Marketing at Cognitive Security, and Director of Alliances at SecureWorks, responsible for partners in Europe, Middle East, and Africa (EMEA). Previously, Gabriel worked at VeriSign & Motorola in a combination of senior marketing & sales roles. Gabriel obtained a degree in Engineering Physics from McMaster University, in Canada and has advanced knowledge in Online Video Solutions, Cloud Computing, Security as a Service (SaaS), Identity & Access Management (IAM), and Managed Security Services (MSS).
[11] Originally used in Gordian Knot, http://sourceforge.net/projects/gordianknot/, an open source project for encoding videos into DivX and XviD formats. This software is no longer being developed.
[13] With multi-pass encoding, the encoder becomes aware that some static parts of the video can be encoded with lower bitrates compared to complex scenes requiring higher bitrates. This knowledge encodes the video more efficiently, but requires higher processing resources and time to complete the task.
[17] This table shows the typical frame size for MPEG2, H.264 and H.265. For consistency of comparison, a frame aspect ratio of 16:9 is used; the Cinemascope[17] frame sizes of 2.39:1 or 2.35:1 would further alter the figures. The table also does not take into account the audio channel, which roughly adds 10% to the bitrate and file size (when a similar quality audio codec is used in each instance). Also not considered are pixel bit depths higher than 8, as used in professional video recording, or the common frame rates of 25, 29.97, 30 and 50fps.
[24] Studies have shown a 39-44% improvement in efficiency over H.264. Joint Collaborative Team on Video Coding (JCT-VC), “Comparison of Compression Performance of HEVC Working Draft 4 with AVC High Profile”