IMPART

   The European Commission-funded IMPART project (Nov 2012 - Oct 2015) focused on 'big (multimodal) data' problems in the field of digital cinema production.

   The tools produced have been integrated into the production software of Double Negative, and they have led to new products from FilmLight. Twenty journal papers and 70 conference publications document the research carried out. The project has also created Open Data for research in this field and Open Source software (for acceleration and for the 3D web).

   The multimodal data have been unified through a 3D paradigm, leading to tools that speed up 3D reconstruction by orders of magnitude and improve it, and to tools for assessing quality (of capture environments, of 3D reconstructions, of in-focus areas, …). Video semantic analysis suitable for large-scale multi-view footage has been provided, as well as integrated 3D-2D (mainly web-based) visualizations.

   This page showcases part of the results.









Data acquisition and registration

   On-set capture for media production is challenging due to moving backgrounds, uncontrolled illumination and limited system support. It requires aligned background scene information as well as capture of the dynamic action in the main capture volume, on-set system monitoring and assessment tools for such uncontrolled capture environments, and accurate composition of footage from the various capture devices.


Multi-modal data registration

   2D and 3D data acquired from various sensors are registered to a unified 3D space for efficient multi-source data management. 3D data from active sensors is registered directly to the reference coordinates through 3D feature detection and matching. 2D footage is registered via 3D reconstruction techniques such as stereo matching or structure-from-motion. Details of the pipeline and algorithms can be found in the following papers:




H. Kim and A. Hilton, “Evaluation of 3D Feature Descriptors for Multi-modal Data Registration,” Proc. 3DV, June 2013

H. Kim, S. Pabst, J. Sneddon, T. Waine, J. Clifford and A. Hilton, “Multi-modal big-data management for film production,” Proc. ICIP, Sep. 2015
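   As an illustration of the final step of this registration: once 3D features have been detected and matched, the rigid transform taking a scan into the reference coordinates can be estimated from the matched points. The minimal numpy sketch below shows only that step, under the assumption that correspondences are already available and outlier-free (the actual pipeline also handles outliers and the differing sensor characteristics); the names used are illustrative.

    import numpy as np

    def rigid_transform_from_matches(src, dst):
        """Least-squares rigid transform (R, t) mapping matched 3D points
        src -> dst (Kabsch/Procrustes). src, dst: (N, 3) corresponding points."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)      # cross-covariance of the centred sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Toy check: a synthetic scan that is a rotated and translated copy of the reference
    rng = np.random.default_rng(0)
    ref = rng.uniform(-5.0, 5.0, (100, 3))                    # points in reference coordinates
    a = np.deg2rad(30.0)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    scan = ref @ R_true.T + np.array([1.0, 2.0, 0.5])         # same points, sensor frame
    R, t = rigid_transform_from_matches(scan, ref)
    print(np.abs(scan @ R.T + t - ref).max())                 # ~1e-14: scan is registered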



Quality assurance tools

   “If in doubt, reshoot” is an attitude born of the fact that media production generates too much data for manual review. Within IMPART, the University of Surrey developed a suite of on-set quality assurance, monitoring and decision support tools, offering the following capabilities:

  • Prior to capture, ensuring that the cameras capture what the director has in mind.
  • During the capture, monitoring that the capture equipment operates correctly.
  • After the capture, validating the data and, where possible, correcting any issues without a reshoot.
   Our work tackles the big data problem in media production by “preventing a multiplication of data beyond necessity”. The specific capabilities include synchronisation and coverage assessment for multicamera networks, along with validation of calibration parameters. These are enabled through an ensemble of state-of-the-art techniques in computer vision, specifically for feature tracking and matching, and robust geometry estimation.



E. Imre and A. Hilton, “Coverage evaluation of camera networks for facilitating big-data management in film production,” Proc. ICIP, Sep 2015.
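   To illustrate the calibration-validation idea in its simplest form (a hedged sketch, not the actual Surrey tool): reprojecting known 3D points through the stored camera parameters and comparing against their observed 2D positions flags cameras whose calibration has drifted, e.g. after a knocked tripod or a lens change. The pixel budget and helper names below are illustrative assumptions.

    import numpy as np

    def reprojection_rmse(K, R, t, points_3d, points_2d):
        """RMS reprojection error (pixels) of known 3D points against their observed
        2D image positions, under a pinhole model x ~ K [R | t] X."""
        cam = points_3d @ R.T + t            # world -> camera coordinates
        proj = cam @ K.T                     # apply intrinsics
        proj = proj[:, :2] / proj[:, 2:3]    # perspective division
        return float(np.sqrt(((proj - points_2d) ** 2).mean()))

    ERROR_BUDGET_PX = 2.0                    # illustrative acceptance threshold

    def calibration_ok(K, R, t, points_3d, points_2d):
        """Flag a camera whose stored calibration no longer explains its observations."""
        return reprojection_rmse(K, R, t, points_3d, points_2d) < ERROR_BUDGET_PX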





Acceleration of 3D reconstruction / Real-time quality assessment based on marginal covariance computation

3D reconstruction quality assessment

   At Brno University of Technology, we developed algorithms for assessing the quality of 3D scene reconstructions computed from stills with the Bundle Adjustment (BA) algorithm; they take advantage of novel data structures and accelerated algorithms developed in the IMPART project. A similar algorithm was also developed for quality assessment of LIDAR scans, a popular way of capturing 3D information in the digital cinema industry. Thanks to these algorithms, capture crews can now get feedback on set, which was not previously possible.




V. Ila, L. Polok, M. Solony, P. Smrz and P. Zemcik, "Fast covariance recovery in incremental nonlinear least square solvers," Proc. ICRA, pp. 4636-4643, May 2015
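   Conceptually, the quality estimate rests on the marginal covariances of the reconstructed points: once BA has converged, the information matrix is approximately JᵀJ (J being the Jacobian of the residuals), and the 3x3 diagonal blocks of its inverse bound the uncertainty of each point. The naive dense sketch below shows only the concept; the contribution of the cited work is recovering these blocks incrementally while exploiting sparsity, which is what makes on-set feedback feasible.

    import numpy as np

    def marginal_point_covariances(J, sigma=1.0, block=3):
        """Naive dense recovery of per-point 3x3 marginal covariances from the residual
        Jacobian J of a converged least-squares problem. In practice J is huge and
        sparse, and the dense inverse below is exactly what fast solvers avoid."""
        information = J.T @ J / sigma**2            # Gauss-Newton approximation of the Hessian
        covariance = np.linalg.inv(information)
        n = covariance.shape[0] // block
        return [covariance[i*block:(i+1)*block, i*block:(i+1)*block] for i in range(n)]

    def point_uncertainty(cov_block):
        """Scalar score: standard deviation along the worst-constrained direction."""
        return float(np.sqrt(np.linalg.eigvalsh(cov_block)[-1]))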



In-focus area detection

   At Brno University of Technology, we developed algorithms for in-focus detection and 2D image technical quality assessment. Our approach differs from the autofocus algorithms employed in today's digital cameras, which compare the high-frequency content of images of the same scene taken under different lens settings. Our novel algorithm requires no reference and gives an absolute focus estimate both for individual pixels and for whole images. This gives users power through metadata, as the information can easily be indexed by the FLUX system. As in Woody Allen's classic Hollywood Ending, now even a blind man can direct a movie.




L. Polok, L. Klicnar, V. Beran, P. Smrz and P. Zemcik, "Quality assurance in large collections of video sequences," Proc. ICIP, pp. 3580-3584, Sep. 2015
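   As a simple illustration of per-pixel, no-reference focus estimation (a standard sharpness cue, not necessarily the algorithm of the cited paper), the local variance of the image Laplacian yields both a focus map and a per-frame score:

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def focus_map(gray, window=15):
        """Per-pixel no-reference focus estimate: local variance of the Laplacian.
        gray: 2D float array in [0, 1]; higher values mean sharper (more in focus)."""
        lap = laplace(gray)
        mean = uniform_filter(lap, size=window)
        return np.maximum(uniform_filter(lap * lap, size=window) - mean * mean, 0.0)

    def frame_focus_score(gray):
        """Single score per frame, e.g. for indexing whole takes by sharpness."""
        return float(focus_map(gray).mean())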




Semantic Video Analysis

Video content analysis and description

   At AUTH, we have developed algorithms that analyze the footage from multiple cameras in order to extract semantic information. We have improved the state of the art in human-centered video analysis through, for example, neural network-based and fast (approximate) classification methods. The IMPART solutions include algorithms and tools for human activity recognition, face recognition and shot type characterization. The extracted semantic information is stored in the AVDP Light XML format.



A.Iosifidis, A.Tefas and I.Pitas, "Graph Embedded Extreme Learning Machine," IEEE Transactions on Cybernetics, 2015

A.Iosifidis, A.Tefas and I.Pitas, "Distance-based Human Action Recognition using optimized class representations," Neurocomputing, vol. 161, pp. 47-55, 2015



Semantic content summarization

   At AUTH, we have also developed algorithms that perform temporal video segmentation of each take based on activity information. We have developed software that divides the videos into segments and stores the semantic information in an XML format. It permits the selection and browsing of specific segments of the camera recordings according to actions and actors, which makes it possible to identify similar scenes, groups of activities and groups of actors, or to produce other summarizations for later use in post-production.




N. Tsapanos, A. Tefas, N. Nikolaidis and I. Pitas, "A Distributed Framework for Trimmed Kernel k-means Clustering," Pattern Recognition, vol. 48, pp. 2685–2698, 2015.

N. Kourous, A. Iosifidis, A. Tefas and I. Pitas, "Video Characterization based on Activity Clustering," in International Conference on Electrical and Computer Engineering (ICECE), Dhaka, Bangladesh, 2014
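   At its simplest, the temporal segmentation step groups consecutive frames that share the same recognised activity into segments; the toy sketch below illustrates the idea (the actual software works on the clustering output of the cited methods and writes the resulting segments to XML):

    def segment_by_activity(frame_labels, fps=25.0):
        """Group consecutive frames with the same activity label into
        (start_seconds, end_seconds, label) segments."""
        segments, start = [], 0
        for i in range(1, len(frame_labels) + 1):
            if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
                segments.append((start / fps, i / fps, frame_labels[start]))
                start = i
        return segments

    # e.g. [(0.0, 0.12, 'walk'), (0.12, 0.2, 'sit')]
    print(segment_by_activity(['walk'] * 3 + ['sit'] * 2))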




Interactive 3D Web

   Universitat Pompeu Fabra explored visualization and annotation concepts through groundwork on web-based multi-platform tools, through 3D visualization of the structure of the dataset, and through simulation tools for dailies.



Progressive Point Cloud

   We focused on the development of a prototype system for remote visualization of data recorded on-set, such as the LIDAR scan of the environment and the camera footage. The goal is for this visualization to be integrated with the content analysis and camera coverage work in order to give a holistic view of all the data recorded on-set. You can try the web demo by clicking on the following image or link.



A. Evans, J. Agenjo, J. Blat. "Web-based Visualisation of On-set Pointcloud Data". 11th European Conference on Visual Media Production (CVMP 2014), London, England (November 2014).

A. Evans, J. Agenjo, J. Blat. "Combined 2D and 3D Visualisation of On-set Big Media Data". In proceedings of the IEEE International Conference on Image Processing (ICIP 2015), Quebec City, Canada (September 2015)





Integrated tools

   Many solutions were developed during the project. In order to test and prove their usability, these solutions were integrated into Double Negative's software.

IMPART tools in Double Negative's Jigsaw package

   Jigsaw, a proprietary software package developed by Double Negative, allows users to efficiently manage and process various kinds of data, from digital photographs to 3D point clouds. IMPART technology has been fully integrated into Jigsaw and greatly enhances its capabilities.



   Point cloud data from various sources (LIDAR, stereo Spheron imagery, photos via photogrammetry) can be registered into a common coordinate frame using tools from the University of Surrey. Surrey has also contributed a robust video stream alignment method that helps with the processing and conforming of witness camera data, an otherwise very time-consuming and manual task.

   These algorithms have been significantly accelerated through contributions from BUT. Additionally, Brno University has supplied a fast algorithm that visualises the estimated quality of point cloud registration, which is very useful during on-set data acquisition and can run on a laptop.

   AUTH has contributed semantic video indexing and search technology, which is useful for artists in animation departments who often need to find very specific reference footage (e.g. walk cycles).

   Last but not least, UPF has contributed fast and efficient methods to compress, stream and display meshes, point cloud data and registered videos in 3D in a normal web browser. This technology is very useful for communication over limited-bandwidth channels, a common situation during on-set data capture.




H. Kim, S. Pabst, J. Sneddon, T. Waine, J. Clifford and A. Hilton, "Multi-modal big-data management for film production," Proc. ICIP, Sep. 2015



Integrated visualization

   At UPF, we used web technologies to display point cloud data and registered videos in the web browser. We developed progressive visualizations, mesh compression techniques and other methods for fast visualization. The unified 3D scene can be easily shared via the web. You can check out several scenarios here.




A. Evans, J. Agenjo, J. Blat. "Hybrid Visualization of Digital Production Big Data". ACM Web3D 2015, Heraklion, Crete (June 2015).
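   A representative ingredient of such compression, sketched here under the assumption of a simple 16-bit quantisation scheme (not necessarily what the actual implementation uses), is to rescale vertex or point positions into their bounding box and store them as 16-bit integers, roughly halving the payload relative to 32-bit floats:

    import numpy as np

    def quantize_positions(xyz):
        """Quantise float positions to uint16 within their bounding box.
        Returns the quantised array plus the (origin, extent) needed to decode."""
        origin = xyz.min(axis=0)
        extent = np.maximum(xyz.max(axis=0) - origin, 1e-12)
        q = np.round((xyz - origin) / extent * 65535).astype(np.uint16)
        return q, origin, extent

    def dequantize_positions(q, origin, extent):
        """Recover approximate float positions on the receiving side."""
        return q.astype(np.float32) / 65535 * extent + origin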




Industrial tools

   The data recorded on-set typically measures in the terabytes and thus may be stored in multiple physical locations. FilmLight's research within IMPART aimed to create a homogeneous view of all data recorded on-set, regardless of its physical location, thus creating a heterogeneous “cloud” of file systems. FilmLight addressed big data issues through three approaches: Generate less data, by contextual monitoring with the FLIP On-set processor appliance; Prune useless data early, by contextual review with the Baselight Dailies software; and Manage data efficiently through post, by metadata-driven file manipulation with the FLUX Manage software in conjunction with the FLUX+ indexing system.

   The tools developed by FilmLight have been showcased at several events around the world, including Dimension 3 in Paris, June 2013; IBC in Amsterdam, September 2013 and 2014; NAB in Las Vegas, April 2014 and 2015; and Cine Gear in LA, June 2015.



FLIP

   FLIP is a hardware device for on-set preview of live camera output with real-time application of looks. It takes away the guesswork of digital cinematography and enables the first thoughts of the DoP or the director to become the foundation for the final grade.



Daylight & Flux+

   Daylight is a powerful dailies platform for shot management and high-performance transcoding. It is designed as a compact yet powerful grading decision tool that helps DoPs and directors establish looks and visualise what they have shot, on set or on location, while also meeting all of the sophisticated deliverables requirements, in one application.

   FLUX comprises a post-production server and a management tool, which, when combined, provide a revolutionary new way to store image assets and build an image factory.








Open Source and Open Data

   Since the middle of the first year of the project, the IMPART partners Brno University of Technology and Universitat Pompeu Fabra have made initial software packages available as Open Source software on university websites, SourceForge and GitHub.

Research dataset

   In October 2014 the IMPART project made publicly available a dataset for research into multimodal movie production, created by the University of Surrey and Double Negative. It consists of about 20 TB of 2D and 3D data and metadata captured during sessions at different locations at the University of Surrey and at Double Negative premises, in both indoor and outdoor environments. The dataset can be found here.



SLAM++

   To support parallelisation and speed-up in an efficient way, Brno University of Technology developed (enhanced) SLAM++, a high-performance nonlinear least squares solver for graph problems, which, among other qualities, outperforms existing implementations on large 3D reconstruction datasets. The software package already has more than 16,000 downloads and can be found here.
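   For readers unfamiliar with such solvers, the sketch below is a minimal dense Gauss-Newton loop for a generic nonlinear least-squares problem; SLAM++ solves the same normal equations but exploits the sparse block structure of graph problems and updates the solution incrementally, which is where its performance advantage comes from.

    import numpy as np

    def gauss_newton(residual_fn, jacobian_fn, x0, iters=20, tol=1e-10):
        """Minimal dense Gauss-Newton for min_x ||r(x)||^2.
        residual_fn(x) -> (m,) residual vector, jacobian_fn(x) -> (m, n) Jacobian."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            r, J = residual_fn(x), jacobian_fn(x)
            dx = np.linalg.solve(J.T @ J, -J.T @ r)   # solve the normal equations
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x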



WebGLStudio

   WebGLStudio is a set of web graphics libraries. The main application is a platform for creating interactive 3D scenes directly in the browser. It allows users to edit the scene visually, code behaviours and edit shaders, all directly from within the app. The libraries have over 1400 stars on GitHub and can be found here.





Publications





Credits

   IMPART stands for Intelligent Management Platform for Advanced Real-Time media processes. It was a European Commission-funded project which started in November 2012 and finished in November 2015.



   Its overall aim was to research, develop and evaluate information management solutions for 'big data' problems in the field of digital cinema production. It has developed new ways of managing, visualising and analysing very large multimodal data sets so that creative personnel can review three-dimensional scene representations on the set, understand the data, identify errors, evaluate the quality of the shot and take creative decisions in real time.

Its partners were:

Universitat Pompeu Fabra UPF

Aristotle University of Thessaloniki AUTH

Brno University of Technology BUT

University of Surrey UniS

Double Negative Visual Effects DNeg

FilmLight


Coordinator: Josep Blat (josep.blat@upf.edu)