OTT & Multiscreen • Digital Video Series • 4 • Search and Discovery Is a Journey, not a Destination

Graphic - Search & Discovery is a Journey (title)

Searching for content has evolved significantly in the past ten years, thanks largely to Google[1]. Consumers don’t even realize how much things have changed, and how quickly we can now find what we’re looking for. For those old enough to remember the TV experience of the 80’s or earlier, discovering new content relied on commercial previews to entice us to watch upcoming programs. The popularity of TV Guide[2] helped untether viewers from these teasers, allowing them to browse future programming schedules in a magazine format.

Graphic - Search & Discovery is a Journey (i.a. Why should I memorize something)

Regardless, tuning into broadcast TV was restricted to watching specific channels during specific timeslots. Viewers needed to reserve that window in their daily schedule. Prime time[4] (between 8pm and 10pm) was established as the most lucrative timeslot in a channel’s 24-hour transmission. Broadcasters faced the constant challenge of juggling their content into optimal timeslots to suit the target audience – a practice that continues today.

The electronic program guide (EPG[5]) gained traction throughout the 90’s as a modern attempt to discover new interests. Instead of browsing a week’s programming in paper format, subscribers were doing so on the lower third[6] of their TV screens. The notion of searching, however, was still out of reach – at least in the sense that consumers are familiar with today. The EPG never reached the depth of search sophistication expected by today’s digital generation, and it still lacks modern features that subscribers expect, such as content suggestions or peer-based recommendations.

Figure i – From Channel Clicking to “Googling”

The linear experience of TV broadcasting can be directly compared to how the PC entered the lives of consumers. When computers first came into the home, they were organized much like television programming – in directories (akin to TV channels) and files (TV episodes) – albeit with greater flexibility and deeper tree structures.

Google helped fix that problem by significantly improving the search paradigm. (Hoping to remind us how effective its algorithms are, Google always displays the duration of your search – who isn’t impressed with 1.5 million hits in 0.2 seconds?) This approach inspired software that achieved similar search speed, accuracy, and ease of use on the PC. Users no longer had to remember where they put their files. All that was needed was a word or phrase used inside the document, and with a good level of confidence the file would be found. The proficiency that web surfers gained from googling[9] translated to the desktop. Gradually there was no longer a need for all those directories. The organization of our digital lives changed from endlessly moving files around the computer to giving them meaningful file names and metadata[10] so that they could be easily searched. Whether files were in one directory or one hundred, they could be found just as quickly.

Figure ii – Evolution of Video Consumption

Fast forward to today, and files have grown in size, quantity, and frequency by orders of magnitude compared to ten years ago. Content is accessible at any time, anywhere, and on any device. Computers now host multimedia libraries – images, music, and videos. Metadata is even more important for these files because there is no inherent text from which to catalog them. Metadata comes in two forms:

  • Structural – For photos this may be the time-stamp of the photo, exposure, shutter speed, or geolocation where it was taken. For a video or audio file this may include the bitrate, overall duration, and compression codec used.
  • Descriptive – For photos this may include the names of people in the images, and the event being photographed. For music this would include the artist, song, and album titles. For movies this may include the actors, directors, and producers of the title. IMDB[11] (Internet Movie Database) is a good example of descriptive metadata for movie titles.

This embedded metadata forms the index needed to find multimedia files that do not have a textual base.
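
As a sketch of how such an index can work, consider a toy library where each file carries structural and descriptive metadata, and a search matches a term against the descriptive fields. All field names and values here are invented for illustration:

```python
# A toy metadata index: each file carries structural and descriptive
# metadata; search matches a query term against descriptive fields.
# Field names are illustrative, not taken from any metadata standard.

library = [
    {"file": "concert.mp3",
     "structural": {"bitrate_kbps": 320, "duration_s": 214},
     "descriptive": {"artist": "Example Band", "album": "Live"}},
    {"file": "kingkong.mp4",
     "structural": {"bitrate_kbps": 8000, "codec": "H.264"},
     "descriptive": {"director": "Peter Jackson", "title": "King Kong"}},
]

def search(term):
    """Return files whose descriptive metadata mentions the term."""
    term = term.lower()
    return [item["file"] for item in library
            if any(term in str(value).lower()
                   for value in item["descriptive"].values())]

print(search("jackson"))  # -> ['kingkong.mp4']
```

Note that the structural fields never enter the text search; they are the machine-readable half of the index, used for filtering by codec, duration, and the like.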

Search & discovery is now an industry in itself, fueling its own revenue streams. Users can now discover interests which were previously inaccessible. Operating systems have followed suit: modern iterations such as Microsoft Windows 8[12] and Apple iOS[13] present a relatively flat structure from a user perspective. Programs are accessible from the home screen or adjacent screens that are a swipe away. Deep directories and file structures are still there, but hidden behind an elegant front-end. Digging deep into our memories to remember where we put something is now archaic.

Figure iii – The Challenges of Search & Discovery

To better understand the evolution of search and the consumption of video, we need to start with an understanding of user behavior. Subscriber needs have evolved into a more complex set of search parameters, and search engines now juggle a dense set of algorithms to present results that match the consumer’s behavior. To do this, multiple algorithms are at play:

  • Collaborative behavior[14] compares the subscriber’s past behavior with that of other subscribers with similar activities, and clusters similar interests. In other words, users who liked Lord of the Rings[15] also liked The Dark Knight[16].
  • Content-based search ties related content together. If a user likes the director Peter Jackson[17], then they may like the movie King Kong[18], which he directed.
  • A recommendation engine[19] takes behavior to a more proactive model by asking subscribers for their opinion, then presenting the collective user ratings.
  • Statistical searches display cumulative totals, such as the number of views, Likes, or the number of reviewers – suggesting a level of popularity for that content.
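
The clustering idea behind the collaborative approach can be sketched in a few lines of Python – here with made-up ratings, and cosine similarity standing in for a production-grade algorithm:

```python
from math import sqrt

# Toy collaborative filter: users who rated titles similarly are
# treated as peers, and a peer's other favourites become suggestions.
# All ratings and titles are illustrative.
ratings = {
    "alice": {"Lord of the Rings": 5, "The Dark Knight": 4},
    "bob":   {"Lord of the Rings": 5, "The Dark Knight": 5, "King Kong": 4},
    "carol": {"Romance Movie": 5},
}

def similarity(a, b):
    """Cosine similarity over the titles both users rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[t] * b[t] for t in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Suggest unseen titles from the most similar other user."""
    others = [u for u in ratings if u != user]
    peer = max(others, key=lambda u: similarity(ratings[user], ratings[u]))
    return [t for t in ratings[peer] if t not in ratings[user]]

print(recommend("alice"))  # -> ['King Kong']
```

Real engines operate over millions of sparse rating vectors and blend this signal with the content-based, recommendation, and statistical approaches listed above, but the clustering principle is the same.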

Figure iv – Collaborative Search Engine clusters

Due to the global nature of a video subscription service, a subscriber should be presented with search results that are relevant to their geolocation[21]. They should not be bombarded with irrelevant or inaccurate results that cannot be monetized due to restrictions in rights management, censorship, or DRM[22] (digital rights management). This applies to any associated advertisements as well. In the same spirit as Google’s “I’m Feeling Lucky”, subscribers want search results that are immediately relevant. Challenges facing modern search engines include:

  • Demographics and culture add their own level of search complexity, as results are altered due to content rights or censorship. Results may be filtered or not shown at all.
  • Advertisers also want their brands shown prominently – whether on a mobile, tablet, TV, or laptop – and their products displayed adjacent to content that complements their brand.
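
Territory-based filtering of this kind reduces to suppressing anything that cannot be monetized in the viewer’s region. A minimal sketch, with a hypothetical two-title catalog and invented territory codes:

```python
# Sketch: suppress search results that cannot be monetized in the
# viewer's territory. Catalog entries and territory codes are made up.
catalog = [
    {"title": "Movie A", "licensed_in": {"US", "CA"}},
    {"title": "Movie B", "licensed_in": {"US", "CA", "CZ"}},
]

def visible_results(results, territory):
    """Return only the titles licensed for the subscriber's region."""
    return [r["title"] for r in results if territory in r["licensed_in"]]

print(visible_results(catalog, "CZ"))  # -> ['Movie B']
```

In practice the same gate is applied to the accompanying advertisements, and the licensing data comes from the rights-management and DRM systems rather than a static list.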

This leads to a wider discussion on the right to censor results, freedom of speech, and the manipulation of search results to suit big brother[23]. How much leeway should be allowed in controlling search results in the presence of sponsors, political influences, media brands, or internet governance?

Online search algorithms continue to face the challenge of increasing accuracy. In the pursuit of excellence, Netflix ran an open contest starting in October 2006 to improve its collaborative filtering algorithm. The US$1 million prize was awarded in September 2009 to BellKor’s Pragmatic Chaos[24], whose entry improved the accuracy of the Netflix algorithm by 10.06%. After thousands of man-hours, the winning algorithm was never implemented, due mainly to the engineering costs involved, according to Netflix sources[25].

Search engines and their related services may already be considered a mature market, but there is still room for improvement. Several online music services allow subscribers to find artists of similar interest through graphical interfaces built on collaborative engines[26]. Google recently enabled dragging a photo onto the search bar so that similar images can be found[27]. IMDB has mapped 2.3 million titles; the value of monetizing such a database was recognized by Amazon.com, resulting in its acquisition of IMDB in 1998. The beneficiaries of these search and discovery improvements continue to be subscribers.

Graphic - Search & Discovery is a Journey (iv.a. If millions of people watched it, then I guess it must be good)

Future challenges include correlating as many data points as are available, and then making sense of the results. How can peer suggestions correlate better with statistics, past viewing, and collaborative results? How can search data correlate better with purchase behavior and the overall personal profile of the subscriber? How can video consumption map to musical or photographic interests? How can search better integrate personal versus business interests? Could a discovery engine reach a level of sophistication where it offers search suggestions better than a close friend? Then again, would we want that level of intimacy with a computer algorithm?

This may be a lot of questions to ask at the end of an article, but isn’t that the foundation of search and discovery?

 

Read Additional Articles in this Series

  • I. Consumption is Personal

    In the days of linear television, broadcasters had a difficult task in understanding their audience. Without a direct broadcasting and feedback mechanism like the Internet, gauging subscriber behavior was slow. Today, online video providers have the ability to conduct a one-to-one conversation with their audience. Viewing habits of consumers will continue to rapidly change in the next ten years. This will require changes in advertising expenditure and tactics.

    II. Granularity of Choice

    The evolution from traditional TV viewing to online video has been swift. This has significantly disrupted disc sales such as DVD and Blu-Ray, as well as cable and satellite TV subscriptions. With the newfound ability to consume content anytime, anywhere, and on any device, consumers are re-evaluating their spending habits. In this paper we will discuss these changes in buying behavior, and identify the turning point of these changes.

    III. Benchmarking the H.265 Video Experience

    Transcoding large video libraries is a time consuming and expensive process. Maintaining consistency in video quality helps to ensure that storage costs and bandwidth are used efficiently. It is also important for video administrators to understand the types of devices receiving the video so that subscribers can enjoy an optimal viewing experience. This paper discusses the differences in quality in popular video codecs, including the recently ratified H.265 specification.

    IV. Search & Discovery Is a Journey, not a Destination

    Television subscribers have come a long way from the days of channel hopping. The arduous days of struggling to find something entertaining to watch are now behind us. As consumers look to the future, the ability to search for related interests and discover new interests is now established as common practice. This paper discusses the challenges that search and discovery engines face in refining their services in order to serve a truly global audience.

    V. Multiscreen Solutions for the Digital Generation

    Broadcasting, as a whole, is becoming less about big powerful hardware and more about software and services. As these players move to online video services, subscribers will benefit from the breadth of content they will provide to subscribers. As the world’s video content moves online, solution providers will contribute to the success of Internet video deployments. Support for future technologies such as 4K video, advancements in behavioral analytics, and accompanying processing and networking demands will follow. Migration to a multiscreen world requires thought leadership and forward-thinking partnerships to help clients keep pace with the rapid march of technology. This paper explores the challenges that solution providers will face in assisting curators of content to address their subscriber’s needs and changing market demands.

    VI. Building a Case for 4K, Ultra High Definition Video

    Ultra High Definition technology (UHD), or 4K, is the latest focus in the ecosystem of video consumption. For most consumers this advanced technology is considered out of their reach, if at all necessary. In actual fact, 4K is right around the corner and will be on consumer wish lists by the end of this decade. From movies filmed in 4K, to archive titles scanned in UHD, there is a tremendous library of content waiting to be released. Furthermore, today’s infrastructure is evolving and converging to meet the demands of 4K, including Internet bandwidth speeds, processing power, connectivity standards, and screen resolutions. This paper explores the next generation in video consumption and how 4K will stimulate the entertainment industry.

    VII. Are You Ready For Social TV?

    Social TV brings viewers to content via effective brand management and social networking. Users recommend content as they consume it, consumers actively follow what others are watching, and trends drive viewers to subject matters of related interests. The integration of Facebook, Twitter, Tumblr and other social networks has become a natural part of program creation and the engagement of the viewing community. Social networks create an environment where broadcasters have unlimited power to work with niche groups without geographic limits. The only limitations are those dictated by content owners and their associated content rights, as well as those entrenched in corporate culture who are preventing broadcasters from evolving into a New Media world.

    VIII. Turning Piratez into Consumers, I

    IX. Turning Piratez into Consumers, II

    X. Turning Piratez into Consumers, III

    XI. Turning Piratez into Consumers, IV

    XII. Turning Piratez into Consumers, V

Content Protection is a risk-to-cost balance. At the moment, the cost of piracy is low and the risk is low. There are no silver bullets to solving piracy, but steps can be taken to reduce levels to something more acceptable. It is untrue that everyone who pirates would be unwilling to buy the product legally. It is equally evident that every pirated copy does not represent a lost sale. If the risk is too high and the cost is set correctly, then fewer people will steal content. This paper explores how piracy has evolved over the past decades, and investigates issues surrounding copyright infringement in the entertainment industry.

About the Author

Gabriel Dusil was recently the Chief Marketing & Corporate Strategy Officer at Visual Unity, with a mandate to advance the company’s portfolio into next generation solutions and expand the company’s global presence. Before joining Visual Unity, Gabriel was the VP of Sales & Marketing at Cognitive Security, and Director of Alliances at SecureWorks, responsible for partners in Europe, Middle East, and Africa (EMEA). Previously, Gabriel worked at VeriSign & Motorola in a combination of senior marketing & sales roles. Gabriel obtained a degree in Engineering Physics from McMaster University, in Canada and has advanced knowledge in Online Video Solutions, Cloud Computing, Security as a Service (SaaS), Identity & Access Management (IAM), and Managed Security Services (MSS).

All Rights Reserved

©2013, All information in this document is the sole ownership of the author. This document and any of its parts should not be copied, stored in the document system or transferred in any way including, but not limited to electronic, mechanical, photographs, or any other record, or otherwise published or provided to the third party without previous express written consent of the author. Certain terms used in this document could be registered trademarks or business trademarks, which are in sole ownership of its owners.

Tags

Connected TV, Digital Video, Gabriel Dusil, Internet Video, Linear Broadcast, Linear TV, Multi-screen, Multiscreen, New Media, Online Video Platform, OTT, Over the Top Content, OVP, Search & Discovery, Search and Discovery, second screen, Smart TV, Social TV, Visual Unity

References


[3] “Why should I memorize something when I know where to find it?”, Albert Einstein

[5] Electronic program guide, Wikipedia, http://en.wikipedia.org/wiki/EPG

[7] Folder (computing), Wikipedia, http://en.wikipedia.org/wiki/Folder_(computing)

[9] Google (verb), Wikipedia, http://en.wikipedia.org/wiki/Googling

[11] Internet Movie Database, Wikipedia, http://en.wikipedia.org/wiki/IMDB

[12] Microsoft Windows 8, Wikipedia, http://en.wikipedia.org/wiki/Microsoft_Windows_8

[14] Collaborative search engine, Wikipedia, http://en.wikipedia.org/wiki/Collaborative_search

[15] The Lord of the Rings: The Fellowship of the Ring, New Line Cinema, ©2001, http://www.imdb.com/title/tt0120737/?ref_=fn_al_tt_1

[16] The Dark Knight, Warner Bros., ©2008, http://www.imdb.com/title/tt0468569/

[18] King Kong, Universal Pictures, ©2005, http://www.imdb.com/title/tt0360717/

[19] Recommender system, Wikipedia, http://en.wikipedia.org/wiki/Recommender_system

[20] “If millions of people watched it, then I guess it must be good?”

[25] “Remember Netflix’s $1m algorithm contest? Well, here’s why it didn’t use the winning entry”, by Paul Sawers, 13th April 2012, TNW, The Next Web http://thenextweb.com/media/2012/04/13/remember-netflixs-1m-algorithm-contest-well-heres-why-it-didnt-use-the-winning-entry/

OTT & Multiscreen • Digital Video Series • 3 • Benchmarking the H.265 Video Experience

Graphic - Benchmarking the H.265 Video Experience (title)
Creating a compelling and engaging video experience has been an ongoing mission for content owners and distributors – whether it was the introduction of CinemaScope[1] in 1953 to stifle the onslaught of color TV[2], the introduction of 3D films[3] in the 50’s and the 80’s and their subsequent re-introduction in 2009 with the launch of Avatar[4], or 4K Ultra High Definition (UHD[5]) TV and retina[6]-quality video. In every case, gauging video quality has been a subjective exercise for consumers and experts alike.

Graphic - Benchmarking the Video Experience (i. Calculating Qf)

Figure i – Visual Representation of calculating Qf

Beyond the signal to noise ratio (SNR[7]) measurement used to compare different compression ratios or codecs, in many cases only a trained eye would notice errors such as compression artifacts[8], screen tearing[9], or telecine judder[10] – unless they were persistent.

A modest metric to assess a video file’s compression density is the Quality factor (Qf[11]). The name is somewhat misleading, since it is not actually a measure of quality but an indication of video compression, computed from three parameters: the bitrate, the number of pixels in the frame, and the overall frame-rate of the video. Qf is essentially a measure of “the amount of data allocated to each pixel in the video”[12]. This metric doesn’t take into account the type of compression profile used, the number of passes originally utilized in the encoding process[13], or any tweaks implemented by the encoding engineer to optimize the video quality. So Qf, or compression density, is just a baseline guide for an administrator who is responsible for transcoding or managing large video libraries.
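
Qf can be computed directly from those three parameters. A sketch in Python (the 16.5 Mbps input is a nominal Blu-Ray-class bitrate chosen for illustration, not a figure from this article):

```python
def qf(bitrate_bps, width, height, fps):
    """Quality factor: bits allocated per pixel, per frame."""
    return bitrate_bps / (width * height * fps)

# A nominal 1080p/24 stream at ~16.5 Mbps (illustrative figure)
print(round(qf(16_500_000, 1920, 1080, 24), 2))  # -> 0.33
```

Holding the profile and tuning parameters equal, a lower Qf simply means fewer bits are being spent on each pixel.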

The accompanying table shows a comparison of Qf using nominal figures for DVD, Blu-Ray and the recently ratified H.265 codec (aka. High Efficiency Video Coding, HEVC[14]). As the compression standard used for encoding the video improves, this corresponds to a reduced Qf.

Although Qf may be considered an inaccurate measure of video compression quality, where it becomes valuable is during the video encoding[15] or transcoding[16] stage – especially when multiple videos are required for processing, and an administrator has the option to choose consistency in the profile used and all related sub-parameters. Choosing a single Qf in this case will ensure global uniformity of compression density across the entire library. There are several internet forum discussions on the optimum quality that should be used for encoding (or an ideal Qf). Realistically, every video has its own unique and optimum settings. Finding this balance for each individual video would be impractical. For this reason, grouping video libraries by genre, or content type, then using a Qf for each group is a more reasonable compromise. For instance, corporate presentations, news casts, medical procedures – basically any type of recording with a lot of static images – could be compressed with the same Qf. The corresponding file for these videos could be as small as 1/20th the size of a typical Blu-Ray movie, with no perceivable loss in video quality.

Table I – Comparing Qf for MPEG2, H.264 & H.265[17]

As shown in the table, the Qf metric is useful in showing that a 1080p movie using the MPEG2 codec (a.k.a. H.262 under the ITU definition) at 16.7GB (Gigabytes[18]) of storage (Qf = 0.33) compares equally to 10GB using H.264 (Qf = 0.20), or, in the case of H.265, to a file size of 6GB (Qf = 0.12) at the same quality. This is because each of these codecs significantly improves the efficiency of the previous one, while maintaining the same level of perceived video quality.
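
Since resolution, frame rate, and running time are held constant across the table, file size scales linearly with Qf, which makes the figures easy to cross-check:

```python
# File size scales linearly with Qf when resolution, frame rate, and
# duration are fixed, so the table's figures can be cross-checked.
mpeg2_size_gb, mpeg2_qf = 16.7, 0.33   # nominal MPEG2 baseline

def size_for(target_qf):
    """Expected file size (GB) at the same resolution and duration."""
    return mpeg2_size_gb * target_qf / mpeg2_qf

print(round(size_for(0.20), 1))  # -> 10.1 (H.264, ~10 GB)
print(round(size_for(0.12), 1))  # -> 6.1  (H.265, ~6 GB)
```

The small deviations from the table’s round numbers simply reflect that the published Qf values are themselves rounded.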

Figure ii – Visual representation of Video Compression standards & relative bandwidth requirements[19]

Ascertaining a video’s compression density can be achieved using MediaInfo[20], an open-source software package. This utility is an excellent resource in determining the formatting and structure of a given video file. MediaInfo displays a plethora of metadata and related details of the media content in a well laid-out overview. This includes granular structure of the audio, video, and subtitles of a movie. The layout of the data can even be customized using HTML and entire directories can be exported as part of a media library workflow. It’s an indispensable resource for content owners and subscribers that are managing large multimedia databases.

Figure iii – Snapshot of MediaInfo showing a video’s Structural Metadata

The H.264 codec (MPEG-4 AVC[21], or Microsoft’s comparable VC1[22]) improved on the efficiency of the MPEG2[23] codec, developed in 1995, by around 40% to 50%. Although H.264 was created in 1998, it didn’t reach the mainstream until Blu-Ray was officially launched in 2006. The H.265 standard currently promises a similar 35% to 50% improvement in efficiency[24]. So where MPEG2 needs 10Mbps to transmit a video, an H.264 codec could send the same file at the same quality at 6Mbps, and H.265 can achieve the same at 3.6Mbps. The trade-off is that H.265 demands two to ten times more computational power than H.264 for encoding, so expect video encoding to take up to ten times longer using today’s processors. Thankfully, devices will need only a two to three times increase in CPU strength to decode the video.
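
The bitrate chain above is simply a compounding of roughly 40% savings per codec generation:

```python
# Compounded bitrate savings quoted in the text: each new codec needs
# roughly 40% less bitrate than its predecessor for the same quality.
mpeg2_mbps = 10.0
h264_mbps = mpeg2_mbps * (1 - 0.40)   # ~40% saving over MPEG2
h265_mbps = h264_mbps * (1 - 0.40)    # ~40% saving again over H.264

print(h264_mbps)               # -> 6.0
print(round(h265_mbps, 1))     # -> 3.6
```

The actual saving varies per title and encoder settings (hence the 35% to 50% range in the standard’s claims), but the order of magnitude is what matters for bandwidth planning.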

The new H.265 standard ushers in multiple levels of cost savings. At the storage level, savings of 40% would be significant for video libraries hosted in any cloud. Content hosting facilities and CDNs (content delivery networks[25]) are costly endeavors for many clients at the moment. It may be argued that storage is a commodity, but when media libraries are measured in Petabytes[26], the capital cost savings from newer, more efficient codecs help the bottom line. Bandwidth costs will also play an important role in further savings. Many online video platforms charge subscribers for the number of gigabytes leaving their facilities; halving those costs by using H.265 would have a significant impact on monthly operational costs. On the flip side, video processing costs will increase in the short term, due to the stronger and more expensive CPU power needed at both the encoding and decoding stages. Existing hardware will likely be used to encode H.265 in the short term, at the expense of time, but dedicated hardware will be needed for any extensive transcoding exercises, or for real-time transcoding services.

Subscription-based internet services significantly compress their video content compared to Blu-Ray counterparts. It’s a practical trade-off between video quality and bandwidth savings. The quality of video only becomes a factor on certain consumer devices that can reveal the deficiencies of a highly compressed video. For example, a 60” (inches diagonal) plasma screen has the resolution to reveal a codec’s compression artifacts, but on a TV smaller than 40” these artifacts would be hardly noticeable to the average consumer. For the most part, a 1080p title is barely distinguishable from 720p on even a medium-sized television. Likewise, for many viewers watching on the majority of mobile devices, high resolution content is both overkill and costly.

Subscribers with bandwidth caps are charged for all streaming data reaching their smartphones, whether they experience the highest quality video or not. Any video data sent beyond the capability of a consumer device is a waste of money.

Graphic - Benchmarking the Video Experience (iii.a. If H.265 lives up to its hype, then it is destined to be the de facto encoding standard for digital video)

At the moment, high definition video playback on mobile devices still poses a challenge. Thanks to multi-core processing on smartphones, consumers are on the brink of having enough power to play full HD video, even while running other processor-intensive tasks in the background. Although quad-core[28] processors such as the Cortex A15 from ARM[29] and nVidia’s Tegra 4[30] (also based on the ARM architecture) have the ability to play some 1080p video, they will still struggle to play a wide library of full HD content without requiring some level of transcoding to lower profiles. 2013 is ushering in a wide range of handsets claiming 1080p support from HTC, Huawei, Sony, Samsung, and ZTE[31]. Multicore GPUs and CPUs running at ultra-low power are establishing mobile devices as a viable platform for 1080p.

In the meantime, the resilience of H.264 and H.265 lies in their use of encoding profiles (eg. baseline, main, or high, and all associated sub-levels). The use of different profiles ensures that the best quality video experience is delivered within the limitations of the device playing the video. Low profiles such as baseline require minimal processing power but do not compress the video efficiently. High-profile modes are highly efficient and squeeze the video file size as small as possible; bandwidth is used efficiently, but more processing power is required on the end device to decode the video. Although the latest Apple iOS[32] devices support high profile, most smartphones still use lower profiles to ensure wider device compatibility. In the interim, internet video providers continue to encode titles into multiple profiles to suit a wide range of subscriber devices, accommodate their limitations in decoding capabilities, and maximize each individual viewing experience.

Higher profiles in H.265 will also have an effect on consumer electronics (CE[33]) equipment. Current iterations of these appliances are not equipped to handle the processing demands of H.265. The next generation Home Theater PC (HTPC[34]), Set Top Box (STB[35]), or Media Player[36] will require upgrades to their processing engines to accommodate these next generation codecs. Lab testing is still required to show that next generation processors will have the ability to decode H.265 at higher bit depths (eg. 10-bit) and at resolutions as high as 4K. Some estimates state that 4K using H.265 will require 80 times more horsepower compared to HD using H.264[45].

To further compensate for the vast differences in mobile coverage and best-effort internet communications, Over the Top (OTT)[37] providers and Online Video Providers (OVP)[38] offer advanced optimization features such as Adaptive Bitrate Streaming (ABS)[39], which adjusts the quality of the video sent in real time. Protocols such as Apple’s HLS[40] and, more recently, MPEG-DASH[41] have been developed to provide a universal approach to implementing adaptive bitrates.
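
At its core, adaptive bitrate streaming means repeatedly picking the best rendition the current connection can sustain. A toy illustration (the bitrate ladder and 20% headroom are arbitrary choices; real HLS and MPEG-DASH players also smooth throughput estimates and manage playback buffers):

```python
# Toy adaptive-bitrate selection: pick the highest rendition whose
# bitrate fits within the measured throughput, with some headroom.
# The ladder values are illustrative, not from any specification.
ladder_kbps = [400, 800, 1500, 3000, 6000]   # available renditions

def pick_rendition(measured_kbps, headroom=0.8):
    """Choose the best rendition leaving some throughput headroom."""
    budget = measured_kbps * headroom
    candidates = [b for b in ladder_kbps if b <= budget]
    return max(candidates) if candidates else min(ladder_kbps)

print(pick_rendition(4000))  # -> 3000
print(pick_rendition(300))   # -> 400
```

The same decision is re-evaluated every few seconds as the client downloads successive segments, which is what lets the stream ride out fluctuating mobile coverage.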

The need for Adaptive Bitrate Streaming and related techniques is just a stop-gap. As quality of service improves and bandwidth speeds increase, the need for optimization techniques will diminish, and in some regions they may disappear completely. During the days of the analog modem, bandwidth was at a premium, so compression and sophisticated error correction methods were used to maximize data throughput while saving costs on the last-mile[42]. As bandwidth increased, these line adaptation features were no longer deemed necessary. Similarly, the need for bandwidth optimization will be diluted in regions where mobile 4G LTE[43] (Long-Term Evolution) becomes ubiquitous. Speeds will become so reliable that even the internet’s best-effort[44] delivery will be sufficient to stream multiple 4K videos, in real time, to any device.

Read Additional Articles in this Series

  • I. Consumption is Personal

    In the days of linear television, broadcasters had a difficult task in understanding their audience. Without a direct broadcasting and feedback mechanism like the Internet, gauging subscriber behavior was slow. Today, online video providers have the ability to conduct a one-to-one conversation with their audience. Viewing habits of consumers will continue to rapidly change in the next ten years. This will require changes in advertising expenditure and tactics.

    II. Granularity of Choice

    The evolution from traditional TV viewing to online video has been swift. This has significantly disrupted disc sales such as DVD and Blu-Ray, as well as cable and satellite TV subscriptions. With the newfound ability to consume content anytime, anywhere, and on any device, consumers are re-evaluating their spending habits. In this paper we will discuss these changes in buying behavior, and identify the turning point of these changes.

    III. Benchmarking the H.265 Video Experience

    Transcoding large video libraries is a time consuming and expensive process. Maintaining consistency in video quality helps to ensure that storage costs and bandwidth are used efficiently. It is also important for video administrators to understand the types of devices receiving the video so that subscribers can enjoy an optimal viewing experience. This paper discusses the differences in quality in popular video codecs, including the recently ratified H.265 specification.

    IV. Search & Discovery Is a Journey, not a Destination

    Television subscribers have come a long way from the days of channel hopping. The arduous days of struggling to find something entertaining to watch are now behind us. As consumers look to the future, the ability to search for related interests and discover new interests is now established as common practice. This paper discusses the challenges that search and discovery engines face in refining their services in order to serve a truly global audience.

    V. Multiscreen Solutions for the Digital Generation

    Broadcasting, as a whole, is becoming less about big, powerful hardware and more about software and services. As these players move to online video services, subscribers will benefit from the breadth of content on offer. As the world’s video content moves online, solution providers will contribute to the success of Internet video deployments, with support for future technologies such as 4K video, advances in behavioral analytics, and the accompanying processing and networking demands. Migration to a multiscreen world requires thought leadership and forward-thinking partnerships to help clients keep pace with the rapid march of technology. This paper explores the challenges that solution providers will face in helping curators of content address their subscribers’ needs and changing market demands.

    VI. Building a Case for 4K, Ultra High Definition Video

    Ultra High Definition technology (UHD), or 4K, is the latest focus in the ecosystem of video consumption. Most consumers consider this advanced technology out of their reach, if necessary at all. In fact, 4K is right around the corner and will be on consumer wish lists by the end of this decade. From movies filmed in 4K to archive titles scanned in UHD, there is a tremendous library of content waiting to be released. Furthermore, today’s infrastructure is evolving and converging to meet the demands of 4K, including Internet bandwidth speeds, processing power, connectivity standards, and screen resolutions. This paper explores the next generation in video consumption and how 4K will stimulate the entertainment industry.

    VII. Are You Ready For Social TV?

    Social TV brings viewers to content via effective brand management and social networking. Users recommend content as they consume it, consumers actively follow what others are watching, and trends drive viewers to subject matter of related interest. The integration of Facebook, Twitter, Tumblr, and other social networks has become a natural part of program creation and of engaging the viewing community. Social networks create an environment where broadcasters can work with niche groups without geographic limits. The only limitations are those dictated by content owners and their associated content rights, and those entrenched in corporate culture that prevent broadcasters from evolving into a New Media world.

    VIII. Turning Piratez into Consumers, Part I

    IX. Turning Piratez into Consumers, Part II

    X. Turning Piratez into Consumers, Part III

    XI. Turning Piratez into Consumers, Part IV

    XII. Turning Piratez into Consumers, Part V

Content protection is a risk-to-cost balance. At the moment, both the cost and the risk of piracy are low. There are no silver bullets for solving piracy, but steps can be taken to reduce it to more acceptable levels. It is untrue that everyone who pirates would be unwilling to buy the product legally, and it is equally evident that not every pirated copy represents a lost sale. If the risk is high enough and the price is set correctly, then fewer people will steal content. This paper explores how piracy has evolved over the past decades, and investigates issues surrounding copyright infringement in the entertainment industry.

About the Author

Gabriel Dusil was recently the Chief Marketing & Corporate Strategy Officer at Visual Unity, with a mandate to advance the company’s portfolio into next-generation solutions and expand the company’s global presence. Before joining Visual Unity, Gabriel was the VP of Sales & Marketing at Cognitive Security, and Director of Alliances at SecureWorks, responsible for partners in Europe, the Middle East, and Africa (EMEA). Previously, Gabriel worked at VeriSign & Motorola in a combination of senior marketing & sales roles. Gabriel obtained a degree in Engineering Physics from McMaster University in Canada, and has advanced knowledge in Online Video Solutions, Cloud Computing, Security as a Service (SaaS), Identity & Access Management (IAM), and Managed Security Services (MSS).

All Rights Reserved

©2013. All information in this document is the sole property of the author. This document and any of its parts may not be copied, stored in a retrieval system, or transferred in any way, including, but not limited to, electronic, mechanical, or photographic means, or any other record, or otherwise published or provided to a third party without the prior express written consent of the author. Certain terms used in this document may be registered trademarks or business trademarks, which are the sole property of their owners.
