LEADER PRESS RELEASE
SEE LEADER ON STAND J08 (BOXER SYSTEMS) AT BVE 2018, LONDON EXCEL, FEBRUARY 27 - MARCH 1
Leader adds ITU-R BT.2408-0 support to LV5490, simplifying HDR production
Accompanying image shows the Leader LV5490 with 4K/HDR customised display.
London, UK, January 23, 2018 – Leader Electronics has chosen BVE 2018 as the UK launch platform for a major addition to the feature set of its LV5490 4K high dynamic range waveform monitor. The instrument now provides full direct support for ITU-R BT.2408-0 'Operational practices in HDR television production'. Published in October 2017, the ITU document provides initial guidance to help ensure optimal and consistent use of the Hybrid Log-Gamma and Perceptual Quantizer techniques specified in ITU-R BT.2100.
The LV5490 4K/HDR waveform monitor supports preconfigured settings to ensure the reference levels are correct for both PQ and HLG production. A 75% HLG or 58% PQ marker is also displayed automatically on the waveform monitor graticule. This represents the reference level, enabling the vision engineer to ensure that any object placed at the centre of interest within a scene occupies the appropriate signal range and that sufficient headroom is reserved for specular highlights. Also new to the LV5490 are system-gamma Optical-Optical Transfer Function (OOTF) handling for HLG and support for Sony's SR Live for HDR production technology.
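The link between the 58% PQ graticule marker and the BT.2408 reference white of 203 cd/m² can be checked numerically. The sketch below is a minimal Python implementation of the SMPTE ST 2084 (PQ) EOTF, written purely for illustration; the function name is ours, and the constants are those published in the ST 2084 specification:

```python
# SMPTE ST 2084 (PQ) EOTF: non-linear signal E' in [0, 1] -> absolute
# display luminance in cd/m^2 (nits). Constants are from ST 2084.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32     # 18.6875

def pq_eotf(signal: float) -> float:
    """Decode a normalised PQ signal value to luminance in nits."""
    e = signal ** (1 / M2)
    y = (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)
    return 10000.0 * y     # PQ is scaled to a 10,000-nit peak
```

On this sketch, pq_eotf(0.58) comes out at roughly 200 cd/m², consistent with the 203 cd/m² BT.2408 reference white (the 58% figure is itself rounded), while pq_eotf(1.0) is exactly 10,000 nits, the PQ peak.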
Leader's LV5490 offers 4K, UHD, 3G, HD and SD test and measurement features in a half-rack-width by 4U portable unit with a full high definition 9 inch front-panel monitor. It provides all the capabilities needed to implement the full potential of high dynamic range in both HD and UHD. Signal displays such as video waveform, chroma levels, colour vectors, bar-graphs, noise, video patterns, quad-3G phase, data tables, camera picture output, colour chart, multichannel audio levels and surround-sound vectors can be viewed simultaneously in a user-customisable layout. If a specific element requires detailed attention, this can be selected quickly for viewing at higher resolution or full-screen. The Leader CINELITE HDR toolset also comes as a standard feature of the LV5490, allowing easy assessment of relative exposure and overall luminance during production. A focus-assist option allows highly accurate on-set adjustment of camera focus to match the ability of 4K and UHD formats to handle very precise image detail.
Additionally on show at BVE 2018 will be the LV5333 portable 3G waveform monitor. Designed for HD and SD signal monitoring, this now supports five methods of displaying log-based gamma curves: Hybrid Log Gamma, Dolby PQ, Arri Log-C, Canon C-Log and Sony S-Log3.
Hybrid Log Gamma (ITU-R BT.2100): Jointly developed by the BBC and NHK, HLG is an open, royalty-free approach which specifies the system parameters essential for extended image dynamic range television, including system colorimetry, signal format and digital representation. The LV5333 comes preconfigured with HLG presets of 50% and 75% reference. This reference can be adjusted to suit production requirements.
Dolby PQ (SMPTE ST 2084): Developed by Dolby, PQ is a licence-based proprietary standard. It uses the SMPTE ST 2084 EOTF and supports peak brightness up to 10,000 nits. The LV5333 comes preconfigured with PQ presets of 1,000, 4,000 and 10,000 nits reference. This reference can be adjusted to suit production requirements.
Log-C: Used by Arri when encoding images on its cameras. Log-C is in fact a family of curves for different exposure indices. The LV5333 supports exposure indices of 200, 400, 800 and 1,600.
C-Log: Used by Canon when encoding images on its cameras.
S-Log3: Developed by Sony, S-Log3 is being introduced across the company's range of broadcast and professional cameras and monitors.
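The HLG curve described above is fully specified in ITU-R BT.2100, so its shape can be sketched in a few lines. The following is an illustrative Python rendering of the BT.2100 HLG OETF (the function name is ours); it shows the hybrid character of the curve, square-root at low levels and logarithmic above them:

```python
import math

# ITU-R BT.2100 HLG OETF: scene-linear signal E in [0, 1] -> non-linear
# signal E' in [0, 1]. Constants a, b and c are as defined in BT.2100.
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e: float) -> float:
    """Encode normalised scene-linear light with the HLG curve."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)   # square-root (gamma-like) segment
    return A * math.log(12 * e - B) + C  # logarithmic segment

# The two branches meet at E = 1/12, where E' = 0.5, and the
# curve reaches E' = 1.0 at full-scale scene light.
```

The choice of constants makes the two segments join smoothly, which is what lets HLG remain broadly compatible with conventional gamma displays at low signal levels.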
Compatible with over 20 HD-SDI/SD-SDI formats, the Leader LV5333 is designed for deployment in studios or technical areas, or attached to a camera tripod. The integral 6.5 inch XGA TFT LCD screen can also be used to display video signal waveform, vectorscope or the video image. CINELITE, CINEZONE, histogram, gamma display, gamut and level error display functions are included as standard. Additional features of the LV5333 include cable length display, external timing display and field frequency deviation display. SDI-embedded audio can be extracted and two user-selectable audio channels sent as an AES/EBU stream to a BNC output. The levels of up to eight audio channels can be checked using on-screen bar displays. User-configurable multi-display combinations within the LV5333 allow easy inspection of signal parameters. Full-screen displays can alternatively be selected to allow detailed review of specific elements.
At a recent, fascinating, and important seminar in London organised by Ooyala in conjunction with Microsoft Azure entitled “AI Meets Media”, the packed room grappled not so much with defining artificial intelligence (AI) versus machine learning (depending on who you ask) but the central core of what enables AI Cognitive Services to be useful in the video world.
Metadata is everything.
And we’re not just talking about data that is gathered, stored, and lying around gathering virtual dust, but data that can be made useful by bringing it to life in multiple ways that are meaningful to each organisation that deploys technologies like the Ooyala Flex media logistics platform, which runs on Microsoft Azure.
Mark Rawlings-Smith, Ooyala Head of Partnership Marketing, opened proceedings by introducing Ooyala - a leading provider of software and services that simplify the complexity of producing, streaming, and monetising video - and acknowledged their expanding partnership with Microsoft, specifically the Microsoft Azure platform, which is the preferred infrastructure platform for Ooyala software and services.
Ooyala Director of Sales Enablement Phil Eade ably guided the group through issues and parameters surrounding AI, video indexing, cognitive services and other areas where investment towards pre-defined goals might be best applied. The end-game for all, though, was to explore the use of metadata to drive processes and technologies that will shape the future of television, enabling broadcast and production companies to maximise their return on investment - essentially by knowing what they have, where it is, and how to monetise it.
And this is no easy task. The world is awash with content - legacy and existing - and a relentless tsunami of new material is also being created every second of every day for news, entertainment, drama, events, specialist interests, or someone’s kitten - who just did the cutest thing ever.
Following an illuminating overview and some very revealing snap polls of those in attendance (all will be revealed by Ooyala in several months’ time) Boxer Systems’ CTO Marc Risby participated in a panel discussion alongside Craig Chuter from the Microsoft Azure Media Industry Blackbelt Team – EMEA, Chris Cheney, CTO and Co-founder, MPP Global, and Adam Jakubowski, Senior Account Executive, Ooyala.
The first question posed was to help define the differences between AI, machine-learning and other terms.
Risby said, “Irrespective of what you want to call it - AI or machine learning - the bottom line is that it is a deeply complex science. It can be difficult to explain and, in a way, what you call it is the least important thing. Concentrating on the practical issues, we know that to deal with the volumes of media we have in a meaningful way, it’s imperative that new technologies that can automatically create metadata to describe almost anything are developed and deployed throughout an organisation. When done well it eliminates a lot of tedious tasks and multiple complexities.”
Risby went on to say that Boxer customers, rather than being bothered about what AI is or what it’s called, are far more interested in applications of AI technology, i.e., can processes be made to work better, cheaper, or faster? Many have been burned in the past by technologies that simply didn’t work.
Risby added, “The introduction of automated metadata generation and extraction technologies is not a new science. Over the past 20 years there have been numerous false starts, so promises of newer and better technologies in recent years are met with a greater degree of ‘Oh yeah? Prove it.’ The difference today is that big data and cloud services combined with AI technologies mean that the pool of data available to improve machine learning, and the ability to learn from feedback, spans not just thousands of hours of content, but millions and, soon, billions. The datasets are so much bigger than in the past.
“We’re now seeing, for example, AI technologies like speech-to-text that are achieving more accurate results than human transcription, and for a fraction of the price. That’s one very small example, but it’s what broadcast and production people want. The question of accuracy is key. There is still scepticism about how well AI will work and the veracity of the data that comes from it, but it’s getting closer, with some estimating up to 90% accuracy, which for many applications is good enough.”
Risby went on to explain that if you extrapolate that ability to higher-end tasks like curating and creating content - i.e., serving you with something the AI thinks you want to see rather than you having to constantly seek it out - everyone wants to improve that ability and make the results more relevant. It’s getting better all the time, which is down to massive increases in the amount of metadata that accompanies the content and how it’s handled.
While it may be somewhat disruptive in the near-term as manufacturers and developers get to grips with what it can and cannot do, it is readily apparent that AI has the potential to be hugely beneficial. For that, end users and those who stand to benefit along the way need to see something that’s real-world and demonstrably working. Many, at least those organisations large enough to invest heavily in it, are already reaping rewards, but technology needs to exist that will affordably migrate those benefits to smaller organisations and facilities.
To that end, the consensus at AI Meets Media was that generating useful metadata without tedious manual intervention and, most importantly, applying AI in applications that can use that information in a truly meaningful way are vital - and edging ever closer to reality.
For example, this article was artificially written by a bot, using loads of metadata, with help from a cute kitten.
VMC 1 (3RU with redundant power) with 2-Stripe IP Series Control Panel
VMC 1 (3RU with redundant power) with 4-Stripe IP Series Control Panel
Select formats may require purchase of optional I/O hardware.
Requires trade-in of a TriCaster 800 series (800, 850, 855, 860) plus control surface, or a TriCaster 8000 model and control surface.