Before answering this question, it's worth looking back at the evolution of display technologies.
Higher-resolution displays have typically been linked to larger screen sizes. For instance, throughout most of the 90s the sweet spot for standard definition (SD) broadcast was around 30” (30 inches/76cm diagonal). The sweet spot for high definition (HD) then grew to around 50″, and for 4K Ultra HD (UHD), displays appear to be establishing their sweet spot at around 80″. So as displays get larger, we need more pixels to fill the additional screen space that consumers have purchased. At the same time, our eyes expect higher resolutions and higher quality as technology improves.
But this doesn't necessarily imply that our viewing distance is changing. If ten feet (around three meters) was the typical sitting distance from televisions throughout most of SD's existence, this vantage point hasn't changed for 50” and 80” TVs. Living room sizes certainly haven't grown in proportion to screen sizes. What is changing is the pixel count enjoyed by the viewer. With larger televisions, our viewing angles are rivaling those of cinema, even if our living rooms are significantly smaller. Although the ideal viewing angle varies per consumer, the sweet spot for an immersive experience converges on a viewing angle of approximately 40° (measured from one's eyes to either edge of the screen)[i].
Consumers enjoying a cinematic experience in their own homes may partially explain the gradual decline in cinema-goers over the past decade. Larger displays coupled with high-quality surround sound in the living room now mimic the immersive experience of a movie theater.
Meanwhile, on the second screen, consumers are acclimating to higher resolution displays. Apple popularized the notion of the retina display which can now be found on many smartphones, tablets, monitors, and laptops. The market is following suit, as shown by recent announcements at CES ’14 in Las Vegas, with 4K displays reaching and exceeding 100” (2.5 meters in diagonal).
Possibly by the time 8K UHD monitors arrive on the market, we will have 120” displays hanging on our walls, as light as picture frames. Or better yet, the wall itself will be an 8K monitor, mounted like wallpaper.
Regarding how 4K will be initially introduced to consumers, early adopters have already shown interest, with OTT providers such as Netflix announcing their plans for introducing 4K content in 2014. Even their hit show, House of Cards[ii], was filmed, edited, and mastered in 4K. In the meantime, subscribers can test 4K content on their portal with sample footage from Netflix.
One lingering question consistently raised by the media is the lack of 4K content. In fact, there is plenty of 4K content; it's just not accessible to the general public. Thousands of movies have already been filmed at camera resolutions between 4K and 6K, thanks to pioneers like RED, which announced its Red One camera in 2006. Furthermore, many movies shot on film have been digitally scanned in 4K. So there is no shortage of 4K content. As 4K becomes mainstream, these libraries will be progressively released to market, similar to (or maybe even faster than) the pace of Blu-Ray releases over the past seven years.
OTT providers are positioning themselves as early adopters of 4K through Video on Demand (VoD) services. They are the obvious candidates for adopting 4K because they can utilize steady improvements in Internet speeds to carry such demanding bandwidth. Initial deployments of 4K OTT may require a hefty buffer in a download-then-play approach (if the OTT provider allows for it). True live, uninterrupted playback will take a bit longer, since 4K currently needs around 24-40 Mbps of bandwidth using the existing H.264 codec. With less than 24 Mbps, many subscribers will find it difficult to experience the benefits of streamed 4K. This is expected to improve once H.265 is deployed, which should roughly halve the bandwidth requirement, with providers looking to deliver 4K at between 12 and 20 Mbps.
Computing power will need to be higher for decoding 4K content. There are no consumer electronics (CE) appliances at the moment that can decode H.265 4K, although high-end desktop computers and existing GPUs (graphics processors) have the power to do the job. It's just a matter of time before high-powered, low-cost processors are available for mass-market distribution in CE appliances.
Finally, 4K OTT will initially need adaptive bitrate (ABR) capabilities to minimize frustration for subscribers who lack the appropriate bandwidth. Early deployments may be bumpy and may generate customer complaints, so service providers will need to be hyper-sensitive to maximizing quality of service (QoS) during the initial stages of a 4K service launch. Eventually, the entire supply chain will align to remove any bottlenecks – from the cloud down to the consumer. This includes bandwidth speeds, processor capacity, and optimized H.265 encoding.
In summary, 4K will be adopted by video enthusiasts who want an immersive theater experience in their living room. 4K content will reach the home as content owners release their libraries, and OTT providers will likely be the first to deliver the service to their subscribers. This content can be encoded using the latest video encoding standard, H.265, and sent through high-bandwidth Internet connections reaching and exceeding 20 Mbps.
• Understanding the entertainment market from ten thousand meters helps industry executives make strategic decisions. This leads to tactical initiatives that drive innovation, new services, and revenue growth. This Q&A series takes a top level view of today’s digital landscape and helps decision makers navigate through the latest technologies and trends in digital video. Gabriel Dusil, Chief Marketing & Corporate Strategy Officer from Visual Unity, discusses the ongoing developments in Over the Top (OTT) services, how these platforms are helping to shape today’s digital society, and addresses the evolving changes in consumer behavior. Topics include 2nd Screen, 4K Ultra High Definition video, H.265 HEVC, global challenges surrounding content distribution, and the future of OTT.
• About Gabriel Dusil
Gabriel Dusil is the Chief Marketing and Corporate Strategy Officer at Visual Unity, with a mandate to advance the company’s portfolio into next generation solutions and expand the company’s global presence. Before joining Visual Unity, Gabriel was the VP of Sales & Marketing at Cognitive Security, and Director of Alliances at SecureWorks, responsible for partners in Europe, the Middle East, and Africa (EMEA). Previously, Gabriel worked at VeriSign and Motorola in a combination of senior marketing and sales roles. Gabriel obtained a degree in Engineering Physics from McMaster University in Canada and has advanced knowledge in Online Video Solutions, Cloud Computing, Security as a Service (SaaS), Identity and Access Management (IAM), and Managed Security Services (MSS).
Some broadcasters see OTT as a threat at the moment, mainly due to the observed loss of control of their subscriber base. For example, while today’s consumers are watching content on their living room TV, they are also simultaneously tweeting, ‘liking’ and surfing the Internet. They are commenting on what they are watching and discovering complementary content. They are researching information on an actor, athlete, or television personality. Or they are simply checking their email. It’s happening in parallel on a second screen such as a tablet or a smartphone, and all of this activity is out-of-band to the broadcast signal. For some broadcasters, this is viewed as losing control of their subscribers because they are not controlling that 2nd screen and can’t monitor what the consumer is doing on that device.
At the same time, some broadcasters are identifying the second screen as an opportunity to further engage the subscriber in live content. By providing complementary content in parallel to live programming, broadcasters are engaging in the 2nd screen to wrest back this control. One example is the award-winning AMC show, The Walking Dead, which broadcasts complementary content over the Internet while the show is being aired.
It's fair to say that a lot of broadcaster activity on the 2nd screen is still experimental, but we continue to advocate experimentation. This is how the industry will optimize subscriber engagement, make it more personal, and refine the experience through new and engaging applications.
The second screen is an opportunity because it can drive new revenue streams for broadcasters – not only advertising revenue, but also the introduction of subscribers to new content through recommendation engines, social networking, and responsive design.
• What are the challenges for OTT moving forward?
One challenge for OTT lies in its global expansion – namely, in the content service provider's ability to obtain the appropriate rights to foreign content for resale in their local market. The challenge is in balancing the serviceable market for OTT against the cost of licensing rights from the USA, UK, or other foreign studios, further complicated by restrictions on the devices that can be used to consume the content. Some markets simply don't have the capital to purchase premium titles from the likes of Hollywood and still expect a profitable return on investment in a local market without a sizable subscriber count. Others don't have high enough purchasing power to justify the subscription fees required to cover the upfront cost of an entertainment library.
Secondly, OTT needs a compelling user interface and user experience (UI/UX). It's fair to say that content is still king; that has not changed. But it's not just the content that needs to be immersive and engaging – it's the entire ecosystem surrounding it. When consumers go to a concert or a live sports event, what do they remember? Not just how great the band or the game was, but the spectacle and energy of the fans. That's what is unforgettable – the environment is the kingdom. Content is still king, but the kingdom needs to be engaging and personal. In the context of an OTT service this is a virtual environment, but the same principle applies: the environment needs to be engaging and fun, not just the content itself. In markets where content is plentiful, the competitive differentiator is a compelling UI/UX.
Thirdly, Digital Rights Management needs to be seamless and portable. Certainly content needs to be protected, and today's DRM solutions serve this need. But there is a sensitive balance between protecting the content and ease of use. DRM needs to evolve so that content can be purchased once and remain portable across operating systems and devices.
Coca-Cola recently relaunched its website, saying that content is “social at the core, digital by design, and emotional.” Coca-Cola may not be an entertainment company per se, but that message speaks directly to the entertainment industry.
This interview with Gabriel Dusil, Chief Marketing and Corporate Strategy Officer at Visual Unity, was produced by the Czech Sales Academy, to educate students on sales and marketing best practices. This is one in a series of interviews with different senior executives across the Czech Republic, focusing on their experiences in the field of sales and marketing, after leaving the education system and entering the workforce. These sessions help to educate students in what to expect when working in these departments, as told from different sales and marketing perspectives.
Visual Unity is a global provider of video and digital media solutions, enabling our clients to deliver premium quality video content. Our clients can measure, analyze and optimize their libraries over time and achieve optimal business success. Our vuMedia™ platform inspires clients to deploy their assets across multiple devices, screens, and media formats. Visual Unity helps clients manage, deliver and monetize their digital content.
Extract from the Academy’s web site: “The “sales school system” was established in order to enable access to a high-quality specialized education for our students without having to pay the school fees during their studies. The whole concept is primarily aimed at the inhabitants of poorer regions of our country. This way the system allows us to choose the quality students who are motivated in the area of personal development and they are also keen to study… …Another important target is the English language education. Learning of English language is divided into two parts – the first, more intensive one, is comprised of the standard classes and the second one is realized during the weekends’ activities… …The students have a unique opportunity to practice English language in various everyday situations which they solve outside the classrooms.”
The Internet has truly changed the playing field of entertainment. For each company that shuts its doors, many more have opened to capitalize on this ever-evolving eMarketplace. The net effect has certainly been disruptive across the music, movie, broadcast, and gaming industries. Disruption can be discussed in both positive and negative contexts. Some industry proponents blame copyright infringement for their revenue decline; others credit the unabated proliferation of their content through the Internet with reaching an untapped global audience. Is the sky falling on the entertainment industry, or is it thriving? Should we thank the Internet, or blame it? Is this a failure of legacy business models, or just the evolution of technology? This presentation explores the effect the Internet has had on the entertainment industry, and looks at how subscriber behavior is advancing the consumption of entertainment.
View the recorded video presentation from IBC ‘13 at:
This session was presented by Gabriel Dusil, Senior VP of Marketing & Corporate Strategy at Visual Unity, and was broadcast live at IBC '13 in Amsterdam on the 14th of September 2013, via the Broadcast Show (http://www.broadcastshow.com/), powered by TV Bay.
The era of multiscreen video has begun. Portability and connectivity are changing the video landscape. TV Everywhere and other multiscreen initiatives are fundamentally changing the entertainment business model, with apps streaming live to TVs, computers, tablets, and mobile phones. According to the latest forecasts from Informa, the global online-video market will be worth $37 billion in 2017, driven by the popularity of OTT (Over the Top) services. Broadcasters, content owners, and distributors must embrace multiscreen delivery to survive. This presentation explores these market trends, and integrated solutions that bridge the gap between the broadcast world and multiscreen consumption.
Searching for content has evolved significantly in the past ten years, thanks largely to Google[1]. Consumers don't even realize how much things have changed, and how fast we can find what we're looking for. For those old enough to remember the TV experience of the 80s or earlier, discovering new content relied on commercial previews to entice us to watch upcoming programs. The popularity of TV Guide[2] helped untether viewers from these teasers, allowing them to browse future programming schedules in a magazine format.
Regardless, tuning into broadcast TV was restricted to watching specific channels during specific timeslots, and viewers needed to reserve that window in their daily schedule. Prime time[4] (between 8pm and 10pm) was established as the most lucrative timeslot in a channel's 24-hour transmission. Broadcasters faced the constant challenge of juggling their content into optimal times to suit the target audience – a practice that continues today.
The electronic programming guide (EPG[5]) gained traction throughout the 90s as a more modern way to discover new interests. Instead of browsing a week's programming in paper format, subscribers were doing so on the lower third[6] of their TV screens. Searching, however, was still out of reach – especially in the sense that consumers are familiar with today. The EPG never reached the depth of search sophistication expected by today's digital generation, and it lacks modern features that subscribers now expect, such as content suggestions and peer-based recommendations.
Figure i – From Channel Clicking to “Googling”
The linear experience in TV broadcasting can be directly compared to how the PC experience entered the lives of consumers. When computers first came into the home, content on PCs was organized much like television programming – in directories (akin to TV channels) and files (TV episodes) – albeit with greater flexibility and deeper tree structures.
Google helped fix that problem by significantly improving the search paradigm. (Hoping to remind us how effective its algorithms are, Google always displays the duration of your search – who isn't impressed with 1.5 million hits in 0.2 seconds?) This approach led to software that achieved similar search speed, accuracy, and ease of use on the PC. Users no longer had to remember where they put their files. All that was needed was to remember a word or phrase used inside the document, and, to a good level of confidence, the file would be found. The proficiency that web surfers achieved from googling[9] translated to the desktop. Gradually there was no longer a need for all those directories. The organization of our digital lives changed from endlessly moving files around the computer to ensuring they had meaningful file names and metadata[10] so that they could be easily searched. Whether files were in one directory or one hundred, they could be found just as quickly.
Figure ii – Evolution of Video Consumption
Fast forward to today, and files have grown in size, quantity, and frequency by orders of magnitude compared to ten years ago. Content is accessible at any time, anywhere, and on any device. Computers now host multimedia libraries – images, music, and videos. Metadata is even more important for these files because there is no inherent text from which to catalog them. Metadata comes in two forms:
Structural – For photos this may be the time-stamp of the photo, exposure, shutter speed, or geolocation where it was taken. For a video or audio file this may include the bitrate, overall duration, and compression codec used.
Descriptive – For photos this may include the names of people in the images and the event being photographed. For music this would include the artist, song, and album titles. For movies this may include the actors, directors, and producers of the title. IMDB[11] (Internet Movie Database) is a good example of descriptive metadata for movie titles.
This embedded metadata forms the index needed to find multimedia files that do not have a textual base.
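As a concrete illustration of structural metadata, the short Python sketch below reads the EXIF block of a photo using the Pillow imaging library. This is a minimal sketch; the file name is an illustrative assumption, not anything referenced in this article.

```python
# A minimal sketch: reading a photo's structural metadata (EXIF) with
# the Pillow imaging library. The file name is an illustrative assumption.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("holiday_photo.jpg")
exif = img.getexif()                      # top-level EXIF directory
for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)       # map numeric tag IDs to readable names
    print(f"{name}: {value}")             # e.g. Make, Model, DateTime
# Deeper fields (exposure time, GPS coordinates) live in nested EXIF
# directories and are read via exif.get_ifd(...).
```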
Search & discovery is now an industry in itself, fueling its own revenue streams. Users can now discover interests that were previously inaccessible. Operating systems have followed suit: modern iterations of Microsoft Windows 8[12] and Apple iOS[13] have a relatively flat structure from a user perspective. Programs are now accessible from the home screen or adjacent screens that are a swipe away. Deep directories and file structures are still there, but hidden behind an elegant front-end. Digging deep into our memories to remember where we put something is now archaic.
Figure iii – The Challenges of Search & Discovery
To better understand the evolution of search and the consumption of video, we need to start with an understanding of user behavior. Subscriber needs have evolved into a more complex set of search parameters, and search engines now juggle a dense set of algorithms to present results that match consumer behavior. Multiple algorithms are at play (a minimal sketch of the collaborative approach follows below):
Collaborative behaviour[14] compares the subscriber's past behaviour with that of other subscribers and clusters similar interests. In other words, users who liked Lord of the Rings[15] also liked The Dark Knight[16].
Content-based search ties related content together. If a user likes the director Peter Jackson[17], then they may like the movie King Kong[18], which he directed.
Recommendation engines[19] take behaviour to a more proactive model by asking subscribers for their opinion, then presenting the collective user ratings.
Statistical searches display cumulative totals, such as the number of views, 'likes', or reviews – suggesting a level of popularity for that content.
Figure iv – Collaborative Search Engine clusters
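To make the collaborative approach concrete, here is a minimal Python sketch of the idea: represent each subscriber as a vector of liked titles, find the nearest neighbour by cosine similarity, and surface what that neighbour liked. The ratings matrix and titles are illustrative assumptions; production engines are far more elaborate.

```python
# A minimal sketch of collaborative filtering: cosine similarity over
# a user-by-title matrix. All data below is an illustrative assumption.
import numpy as np

TITLES = ["Lord of the Rings", "The Dark Knight", "King Kong", "Batman Begins"]
# Rows = subscribers, columns = titles; 1 = watched/liked, 0 = not.
RATINGS = np.array([
    [1, 1, 0, 1],   # subscriber 0
    [1, 1, 1, 0],   # subscriber 1
    [0, 0, 1, 1],   # subscriber 2
])

def recommend(user: int, k: int = 1) -> list:
    """Recommend titles liked by the k most similar other subscribers."""
    target = RATINGS[user]
    sims = []
    for other, row in enumerate(RATINGS):
        if other == user:
            continue
        denom = np.linalg.norm(target) * np.linalg.norm(row)
        sims.append((np.dot(target, row) / denom, other))
    sims.sort(reverse=True)               # most similar neighbours first
    recs = set()
    for _, other in sims[:k]:
        # Titles the neighbour liked that the target user hasn't seen yet
        recs |= {TITLES[i] for i in range(len(TITLES))
                 if RATINGS[other][i] and not target[i]}
    return sorted(recs)

print(recommend(0))   # -> ['King Kong'], liked by the most similar subscriber
```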
Due to the global nature of a video subscription service, a subscriber should be presented with search results that are relevant to their geolocation[21]. They should also not be bombarded with irrelevant or inaccurate results that cannot be monetized due to restrictions in rights management, censorship, or DRM[22] (digital rights management); this applies to any associated advertisements as well. In the same spirit as Google's “I'm Feeling Lucky”, subscribers want search results that are immediately relevant. Challenges facing modern search engines include the following (a minimal sketch of region-based filtering follows this list):
Demographics and culture add their own level of search complexity, as results are altered due to content rights or censorship. Results may be filtered or not shown at all.
Advertisers also want their brands shown prominently – whether on a mobile, tablet, TV, or laptop – and their products displayed adjacent to content that complements their brand.
This leads to a wider discussion on the right to censor results, freedom of speech, and the manipulation of search results to suit big brother[23]. How much leeway should be allowed in controlling search results in the presence of sponsors, political influences, media brands, or Internet governance?
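A minimal sketch of the region-based filtering described above: prune the catalog by the subscriber's geolocation before ranking, so that titles which cannot be monetized in that region never appear. The catalog structure and region codes are illustrative assumptions.

```python
# A minimal sketch of rights-aware search filtering by region.
# The catalog and region codes below are illustrative assumptions.

CATALOG = [
    {"title": "Movie A", "licensed_regions": {"US", "CA"}},
    {"title": "Movie B", "licensed_regions": {"US", "CZ", "DE"}},
    {"title": "Movie C", "licensed_regions": {"CZ"}},
]

def searchable_titles(subscriber_region: str) -> list:
    """Return only the titles licensed for the subscriber's region."""
    return [item["title"] for item in CATALOG
            if subscriber_region in item["licensed_regions"]]

print(searchable_titles("CZ"))   # -> ['Movie B', 'Movie C']
```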
Online search algorithms continue to face the challenge of increasing accuracy. In the pursuit of excellence, Netflix ran an open contest starting in October 2006 to improve its collaborative filtering algorithm. The US$1 million prize was awarded in September 2009 to BellKor's Pragmatic Chaos[24], which improved the accuracy of the Netflix algorithm by 10.06%. After thousands of man-hours, the algorithm was never implemented, due mainly to the implementation costs involved, according to Netflix sources[25].
Search engines and their related services may already be considered a mature market, but there is still room for improvement. Several online music services allow subscribers to find artists of similar interest through graphical means similar to collaborative engines[26]. Google recently enabled the ability to drag a photo onto its search bar so that similar images can be found[27]. IMDB has mapped 2.3 million titles; the value of monetizing such a database was recognized by Amazon.com, resulting in its acquisition of IMDB in 1998. The beneficiaries of these search and discovery improvements continue to be subscribers.
Future challenges include correlating as many data points as available, and then making sense of the results. How can peer suggestions correlate better with statistics, past viewing, and collaborative results? How can search data correlate better with purchase behavior and the overall personal profile of the subscriber? How can video consumption map to musical or photographic interests? How can search better integrate personal versus business interests? Could a discovery engine reach a level of sophistication that offers search suggestions better than a close friend? Then again, would we want that level of intimacy with a computer algorithm?
This may be a lot of questions to ask at the end of an article, but isn’t that the foundation of search and discovery?
In the days of linear television, broadcasters had a difficult task in understanding their audience. Without a direct broadcasting and feedback mechanism like the Internet, gauging subscriber behavior was slow. Today, online video providers have the ability to conduct a one-to-one conversation with their audience. Viewing habits of consumers will continue to rapidly change in the next ten years. This will require changes in advertising expenditure and tactics.
The evolution from traditional TV viewing to online video has been swift. This has significantly disrupted disc sales such as DVD and Blu-Ray, as well as cable and satellite TV subscriptions. With the newfound ability to consume content anytime, anywhere, and on any device, consumers are re-evaluating their spending habits. In this paper we will discuss these changes in buying behavior, and identify the turning point of these changes.
Transcoding large video libraries is a time consuming and expensive process. Maintaining consistency in video quality helps to ensure that storage costs and bandwidth are used efficiently. It is also important for video administrators to understand the types of devices receiving the video so that subscribers can enjoy an optimal viewing experience. This paper discusses the differences in quality in popular video codecs, including the recently ratified H.265 specification.
IV. Search & Discovery Is a Journey, not a Destination
Television subscribers have come a long way from the days of channel hopping. The arduous days of struggling to find something entertaining to watch are now behind us. As consumers look to the future, the ability to search for related interests and discover new interests is now established as common practice. This paper discusses the challenges that search and discovery engines face in refining their services in order to serve a truly global audience.
V. Multiscreen Solutions for the Digital Generation
Broadcasting, as a whole, is becoming less about big, powerful hardware and more about software and services. As these players move to online video services, subscribers will benefit from the breadth of content they provide. As the world's video content moves online, solution providers will contribute to the success of Internet video deployments. Support for future technologies such as 4K video, advancements in behavioral analytics, and the accompanying processing and networking demands will follow. Migration to a multiscreen world requires thought leadership and forward-thinking partnerships to help clients keep pace with the rapid march of technology. This paper explores the challenges that solution providers will face in assisting curators of content to address their subscribers' needs and changing market demands.
VI. Building a Case for 4K, Ultra High Definition Video
Ultra High Definition (UHD) technology, or 4K, is the latest focus in the ecosystem of video consumption. For most consumers this advanced technology seems out of reach, if necessary at all. In fact, 4K is right around the corner and will be on consumer wish lists by the end of this decade. From movies filmed in 4K to archive titles scanned in UHD, there is a tremendous library of content waiting to be released. Furthermore, today's infrastructure is evolving and converging to meet the demands of 4K, including Internet bandwidth speeds, processing power, connectivity standards, and screen resolutions. This paper explores the next generation in video consumption and how 4K will stimulate the entertainment industry.
Social TV brings viewers to content via effective brand management and social networking. Users recommend content as they consume it, consumers actively follow what others are watching, and trends drive viewers to subject matters of related interests. The integration of Facebook, Twitter, Tumblr and other social networks has become a natural part of program creation and the engagement of the viewing community. Social networks create an environment where broadcasters have unlimited power to work with niche groups without geographic limits. The only limitations are those dictated by content owners and their associated content rights, as well as those entrenched in corporate culture who are preventing broadcasters from evolving into a New Media world.
Content protection is a risk-to-cost balance. At the moment, the cost of piracy is low and the risk is low. There are no silver bullets for solving piracy, but steps can be taken to reduce it to more acceptable levels. It is untrue that everyone who pirates would be unwilling to buy the product legally; it is equally evident that not every pirated copy represents a lost sale. If the risk is high enough and the price of legal content is set correctly, then fewer people will steal it. This paper explores how piracy has evolved over the past decades, and investigates issues surrounding copyright infringement in the entertainment industry.
Online video providers today have the ability to conduct a one-to-one conversation with their audience, compared to the somewhat anonymous nature of this relationship in traditional TV. Viewing habits will continue to change rapidly over the next ten years, bringing more choice, portability, and accessibility to video. Granular analysis of subscriber behavior will open new opportunities for content owners, end users, and everyone in between. This will require accompanying changes in advertising expenditure as it pertains to a global versus local focus. Given the global reach of video, due to the ubiquity of the Internet, online services will need to optimize in order to capitalize on new market opportunities.
Creating a compelling and engaging video experience has been an ongoing mission for content owners and distributors: whether it was the introduction of CinemaScope[1] in 1953 to stifle the onslaught of color TV[2], the introduction of 3D films[3] in the 50s and the 80s and their subsequent re-introduction in 2009 with the launch of Avatar[4], or 4K Ultra High Definition (UHD[5]) TV and retina[6]-quality video. In every era, gauging video quality has been a subjective exercise for consumers and experts alike.
Figure i – Visual Representation of calculating Qf
Beyond the signal-to-noise ratio (SNR[7]) measurement used to compare different compression ratios or codecs, in many cases only a trained eye would notice errors such as compression artifacts[8], screen tearing[9], or telecine judder[10] – unless they were persistent.
A modest metric for assessing a video file's compression density is the quality factor (Qf[11]). The name is somewhat misleading, since Qf is not actually a measure of quality but an indication of video compression derived from three parameters: bitrate, the number of pixels in the frame, and the overall frame rate – that is, Qf = bitrate ÷ (frame width × frame height × frame rate), or "the amount of data allocated to each pixel in the video"[12]. This metric doesn't take into account the type of compression profile used, the number of passes originally utilized in the encoding process[13], or any tweaks implemented by the encoding engineer to optimize the video quality. So Qf, or compression density, is just a baseline guide for an administrator who is responsible for transcoding or managing large video libraries.
The accompanying table shows a comparison of Qf using nominal figures for DVD, Blu-Ray, and the recently ratified H.265 codec (aka High Efficiency Video Coding, HEVC[14]). As each successive compression standard improves, the Qf needed for the same perceived quality drops.
Although Qf may be considered an imprecise measure of video compression quality, it becomes valuable during the video encoding[15] or transcoding[16] stage – especially when multiple videos need to be processed and an administrator wants consistency in the profile used and all related sub-parameters. Choosing a single Qf in this case ensures global uniformity of compression density across the entire library (a minimal calculation sketch follows the table below). There are several Internet forum discussions on the optimum quality that should be used for encoding (or an ideal Qf). Realistically, every video has its own unique and optimum settings, and finding this balance for each individual video would be impractical. For this reason, grouping video libraries by genre or content type, then using a single Qf for each group, is a more reasonable compromise. For instance, corporate presentations, newscasts, medical procedures – basically any type of recording with a lot of static images – could be compressed with the same Qf. The corresponding file for these videos could be as small as 1/20th the size of a typical Blu-Ray movie, with no perceivable loss in video quality.
Table I – Comparing Qf for MPEG2, H.264 & H.265[17]
As shown in the table, the Qf metric is useful in showing that a 1080p movie using the MPEG2 codec (aka H.262 under the ITU definition) at 16.7GB (gigabytes[18]) of storage (Qf = 0.33) compares equally to 10GB using H.264 (Qf = 0.20), or, in the case of H.265, to a file size of 6GB (Qf = 0.12), again maintaining the same quality. This is because each of these codecs significantly improves on the efficiency of the previous one while maintaining the same level of perceived video quality.
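Using only the three parameters named above, the Qf calculation can be reproduced in a few lines of Python. The bitrate, resolution, and duration below are illustrative values chosen to approximate the H.264 row of Table I.

```python
# A minimal sketch of the Qf (compression density) calculation:
# bits allocated per pixel, i.e. bitrate divided by pixel throughput.

def quality_factor(bitrate_bps: float, width: int, height: int, fps: float) -> float:
    """Qf = bitrate / (pixels per frame * frames per second)."""
    return bitrate_bps / (width * height * fps)

def file_size_gb(bitrate_bps: float, duration_s: float) -> float:
    """Approximate file size in gigabytes for a given bitrate and duration."""
    return bitrate_bps * duration_s / 8 / 1e9

# Illustrative example: a two-hour 1080p title at 24fps and 10 Mbps
bitrate = 10_000_000
print(f"Qf   = {quality_factor(bitrate, 1920, 1080, 24):.2f} bits/pixel")  # ~0.20
print(f"size = {file_size_gb(bitrate, 2 * 3600):.1f} GB")                  # ~9.0 GB
```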
Figure ii – Visual representation of Video Compression standards & relative bandwidth requirements[19]
Ascertaining a video's compression density can be achieved using MediaInfo[20], an open-source software package. This utility is an excellent resource for determining the formatting and structure of a given video file. MediaInfo displays a plethora of metadata and related details of the media content in a well laid-out overview, including the granular structure of the audio, video, and subtitles of a movie. The layout of the data can even be customized using HTML, and entire directories can be exported as part of a media library workflow. It's an indispensable resource for content owners and subscribers who are managing large multimedia databases (a minimal example of reading this metadata programmatically follows the figure below).
Figure iii – Snapshot of MediaInfo showing a video’s Structural Metadata
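For readers who prefer to script this, the pymediainfo package (a Python wrapper around the same MediaInfo library) exposes the structural metadata programmatically. This is a minimal sketch; the file name is an illustrative assumption.

```python
# A minimal sketch: reading structural metadata with pymediainfo,
# a Python wrapper around the MediaInfo library discussed above.
from pymediainfo import MediaInfo

media_info = MediaInfo.parse("sample_movie.mkv")   # illustrative file name
for track in media_info.tracks:
    if track.track_type == "Video":
        print("codec:     ", track.format)      # e.g. AVC, HEVC
        print("resolution:", track.width, "x", track.height)
        print("frame rate:", track.frame_rate)
        print("bit rate:  ", track.bit_rate)    # bits per second, when present
```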
The H.264 codec (MPEG-4 AVC[21]) and Microsoft's own VC1[22] improved on the efficiency of the MPEG2[23] codec, developed in 1995, by around 40% to 50%. Although H.264 was created in 1998, it didn't reach the mainstream until Blu-Ray was officially launched in 2006. The H.265 standard currently promises a similar 35% to 50% improvement in efficiency[24]. So where MPEG2 needs 10Mbps to transmit a video, an H.264 codec could send the same file at the same quality at 6Mbps, and H.265 can achieve the same at 3.6Mbps. The trade-off in using H.265 is two to ten times higher computational power than H.264 for encoding, so expect video encoding to take up to ten times longer when using today's processors. Thankfully, devices will need only a two to three times increase in CPU strength to decode the video.
The new H.265 standard ushers in multiple levels of cost savings. At the storage level, cost savings of 40% would be significant for video libraries hosted in any cloud. Content hosting facilities and CDNs (content delivery networks[25]) are a costly endeavor for many clients at the moment. It may be argued that storage is a commodity, but when media libraries are measured in petabytes[26], the capital cost savings from newer and more efficient codecs help the bottom line. Bandwidth costs will also play an important role in further savings: many online video platforms charge subscribers for the number of gigabytes leaving their facilities, and halving those costs by using H.265 would have a significant impact on monthly operational costs. On the flip side, video processing costs will increase in the short term, due to the stronger and more expensive CPU power needed at both the encoding and decoding stages. Existing hardware will likely be used to encode H.265 in the short term, at the expense of time, but dedicated hardware will be needed for any extensive transcoding exercises or real-time transcoding services.
Subscription-based Internet services significantly compress their video content compared to Blu-Ray counterparts – a practical trade-off between video quality and bandwidth savings. But video quality only becomes a factor on certain consumer devices that can reveal the deficiencies of a highly compressed video. For example, a 60” (inches diagonal) plasma screen has the resolution to reveal a codec's compression artifacts, but on a TV of less than 40” these artifacts would be hardly noticeable to the average consumer. For the most part, a 1080p title is barely distinguishable from 720p on even a medium-sized television. Likewise, for many viewers watching on the majority of mobile devices, high-resolution content is both overkill and costly.
For those with bandwidth caps, subscribers are charged for all streaming data reaching their smartphone, whether they experience the highest quality video or not. Any video data sent beyond the capability of a consumer device is a waste of money (as the back-of-the-envelope calculation below illustrates).
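A quick calculation makes the point. The bitrates below are illustrative assumptions for typical 720p and 1080p H.264 streams, not figures from any particular service.

```python
# Data consumed per hour of streaming at a given bitrate.

def gb_per_hour(bitrate_mbps: float) -> float:
    """Gigabytes consumed per hour of continuous streaming."""
    return bitrate_mbps * 1e6 * 3600 / 8 / 1e9

for label, mbps in [("720p @ 3 Mbps", 3.0), ("1080p @ 6 Mbps", 6.0)]:
    print(f"{label}: ~{gb_per_hour(mbps):.1f} GB/hour")
# ~1.4 vs ~2.7 GB/hour: on a small phone screen the 1080p stream may
# look no better, yet it doubles the subscriber's data consumption.
```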
At the moment, video playback on mobile devices still poses a challenge for high definition. Thanks to multi-core processing on smartphones, consumers are on the brink of having enough power to play full HD video, even while running other processor-intensive tasks in the background. Although quad-core[28] processors such as the Cortex A15 from ARM[29] and nVidia's Tegra 4[30] (also based on the ARM architecture) have the ability to play some 1080p video, they will still struggle to play a wide library of full HD content without requiring some level of transcoding to lower profiles. 2013 is ushering in a wide range of handsets claiming 1080p support from HTC, Huawei, Sony, Samsung, and ZTE[31]. Multicore GPUs and CPUs running at ultra-low power are establishing mobile devices as a viable platform for 1080p.
In the meantime, the resilience of H.264 and H.265 lies in their use of encoding profiles (e.g. baseline, main, or high, and all associated sub-levels). The use of different profiles ensures that the best quality video experience is delivered within the limitations of the device playing the video. Low profiles such as baseline require minimal processing power but do not compress the video efficiently. High-profile modes are highly efficient and squeeze the video file size as small as possible; bandwidth is used efficiently, but higher processing power is required from the end device to decode the video. Although the latest Apple iOS[32] devices support high profile, most smartphones still use lower profiles to ensure wider device compatibility. In the interim, Internet video providers continue to encode titles into multiple profiles to suit a wide range of subscriber devices, accommodate their limitations in decoding capabilities, and maximize each individual viewing experience (a sketch of such a multi-profile encode follows below).
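As a sketch of how such a profile ladder might be produced, the snippet below drives ffmpeg from Python to encode three H.264 renditions. The input file, output names, bitrates, resolutions, and profile/level pairings are illustrative assumptions; real encoding ladders are tuned per service and per title.

```python
# A minimal sketch of producing multiple H.264 renditions with ffmpeg.
# All file names and encoding parameters are illustrative assumptions.
import subprocess

RENDITIONS = [
    # (profile, level, video bitrate, output height, file name)
    ("baseline", "3.0", "800k",  360,  "out_360p.mp4"),   # older/low-power devices
    ("main",     "3.1", "2000k", 720,  "out_720p.mp4"),
    ("high",     "4.0", "5000k", 1080, "out_1080p.mp4"),  # capable devices only
]

for profile, level, bitrate, height, outfile in RENDITIONS:
    subprocess.run([
        "ffmpeg", "-y", "-i", "source.mp4",
        "-c:v", "libx264",
        "-profile:v", profile, "-level", level,
        "-b:v", bitrate,
        "-vf", f"scale=-2:{height}",   # keep aspect ratio, even-width output
        "-c:a", "aac", "-b:a", "128k",
        outfile,
    ], check=True)
```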
Higher profiles in H.265 will also have an effect on consumer electronics (CE[33]) equipment. Current iterations of these appliances are not equipped to handle the processing demands of H.265. The next generation Home Theater PC (HTPC[34]), Set Top Box (STB[35]), or Media Player[36] will require upgrades to their processing engines to accommodate these next generation codecs. Lab testing is still required to demonstrate that next generation processors will be able to decode H.265 at higher bit depths (e.g. 10-bit) and at resolutions as high as 4K. Some estimates state that 4K using H.265 will require 80 times more horsepower than HD using H.264[45].
To further compensate for the vast differences in mobile coverage and best-effort Internet communications, Over the Top (OTT)[37] providers and Online Video Providers (OVP)[38] offer advanced video optimization features such as Adaptive Bitrate Streaming (ABS)[39] – a technique for optimizing the quality of video sent in real time. Protocols such as Apple's HLS[40] and, more recently, MPEG-DASH[41] have been developed to provide a universal approach to implementing adaptive bitrates.
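The client-side heart of adaptive bitrate streaming is simple to sketch: measure throughput, then pick the highest rendition that fits with some headroom. The bitrate ladder and the 0.8 safety factor below are illustrative assumptions, not any particular player's algorithm.

```python
# A minimal sketch of adaptive-bitrate rendition selection: choose the
# highest rendition that fits within measured throughput, with headroom.

BITRATE_LADDER_KBPS = [800, 2000, 5000]  # e.g. 360p, 720p, 1080p renditions

def select_rendition(measured_kbps: float, safety: float = 0.8) -> int:
    """Return the highest rendition bitrate that fits the available bandwidth."""
    budget = measured_kbps * safety
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else min(BITRATE_LADDER_KBPS)

print(select_rendition(1200.0))   # -> 800  (only the lowest tier fits)
print(select_rendition(7000.0))   # -> 5000 (full HD tier fits with headroom)
```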
The need for Adaptive Bitrate Streaming and related techniques is just a stop-gap. As quality of service improves and bandwidth speeds increase, the need for optimization techniques will diminish, and in some regions they may disappear completely. During the days of the analog modem, bandwidth was at a premium, so compression techniques and sophisticated error-correction methods were used to maximize data throughput while saving costs over the last mile[42]. As bandwidth increased, these line-adaptation features were no longer deemed necessary. Similarly, the need for bandwidth optimization will be diluted in regions where mobile 4G LTE[43] (Long-Term Evolution) becomes ubiquitous. Speeds will become so reliable that even the Internet's best-effort[44] delivery will be sufficient to deliver multiple 4K videos, in real time, to any device.
[11] Originally used in Gordian Knot, http://sourceforge.net/projects/gordianknot/, an open-source project for encoding videos into DivX and XviD formats. This software is no longer being developed.
[13] With multi-pass encoding, the encoder becomes aware that static parts of the video can be encoded at lower bitrates, while complex scenes require higher bitrates. This knowledge allows the video to be encoded more efficiently, but requires more processing resources and time to complete the task.
[17] This table shows the typical frame size for MPEG2, H.264, and H.265. For consistency and for the sake of comparison, a frame aspect ratio of 16:9 is shown. The CinemaScope frame size of 2.39:1 or 2.35:1 would further alter the figures. The table also does not take into account the audio channel, which roughly amounts to a 10% increase in bitrate and file size (when a similar-quality codec is used in each instance). Pixel bit depths higher than 8 (as used in professional video recording) and common frame rates of 25, 29.97, 30, or 50fps are also not considered.
[24] Studies have shown a 39-44% improvement in efficiency over H.264. Joint Collaborative Team on Video Coding (JCT-VC), “Comparison of Compression Performance of HEVC Working Draft 4 with AVC High Profile”