10th International Workshop on Digital Forensics and Watermarking
IWDW 2011
Atlantic City, New Jersey, USA
23–26 October 2011

Jessica Fridrich (SUNY Binghamton, USA)

Modern Trends in Steganography and Steganalysis

Abstract

Only recently have researchers working in steganography realized how much the assumptions made about the cover source and the availability of information to Alice, Bob, and the Warden influence some of the most fundamental aspects of the field, including the way a steganographic system is built and broken and how much information can be securely embedded in a given object. While the problem of embedding messages undetectably has been resolved for simple artificial sources, it remains vastly open for empirical covers, examples of which are digital media objects such as digital images, video, and audio. The fact that empirical media are fundamentally incognizable brings serious complications but also gives researchers plenty of opportunities to uncover very interesting and sometimes quite surprising results. An example is the square root law of imperfect steganography, which states that the size of the secure payload in empirical objects increases only with the square root of the cover size. Since steganographic methods designed to be undetectable with respect to a given model are usually easy to attack by going outside of the model, modern steganography works with complex models of covers in which the embedding distortion is minimized, in the hope that it will be difficult for the Warden to work "outside of the model." The problem of cover source modeling is equally important in steganalysis. However, while working with complex models in steganography is feasible, learning the relationship between cover and stego objects in a high-dimensional model space can be quite challenging due to the rapidly increasing complexity of classifier training, the lack of training data, and the loss of robustness. In my talk, I will provide a retrospective view of the field and point out some of the recent achievements as well as bottlenecks for future development in source model building and machine learning.
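
To make the square root law concrete, here is a minimal numerical sketch added for illustration; it is not part of the talk. It assumes, purely hypothetically, that the secure payload of a cover with n elements is M(n) = C * sqrt(n), where C is a constant determined by the cover source and the acceptable detection risk; the value C = 0.5 below is invented for the example.

import math

# Hypothetical constant; in reality it depends on the cover source and on the
# level of detection risk the steganographer is willing to accept.
C = 0.5

for n in (10_000, 100_000, 1_000_000, 10_000_000):   # cover sizes, e.g., in pixels
    payload_bits = C * math.sqrt(n)   # absolute secure payload keeps growing...
    rate = payload_bits / n           # ...but the rate per cover element shrinks as 1/sqrt(n)
    print(f"n={n:>10,}  payload ~ {payload_bits:7.0f} bits  rate ~ {rate:.6f} bits/element")

Running the loop shows the consequence stated in the abstract: increasing the cover size a thousand-fold (from 10,000 to 10,000,000 elements) raises the payload only by a factor of about 31.6 (the square root of 1000), while the embedding rate per element drops from 0.005 to roughly 0.00016 bits.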

Biography

Jessica Fridrich holds the position of Professor of Electrical and Computer Engineering at Binghamton University (SUNY). She received her PhD in Systems Science from Binghamton University in 1995 and her MS in Applied Mathematics from Czech Technical University in Prague in 1987.

Her main interests are in steganography, steganalysis, digital watermarking, and digital image forensics. Dr. Fridrich's research work has been generously supported by the US Air Force and AFOSR. Since 1995, she has received 19 research grants totaling over $7.5 million for projects on data embedding and steganalysis that have led to more than 120 papers and 7 US patents. Dr. Fridrich is a member of IEEE and ACM.


Nasir Memon (NYU-Poly, USA)

Photo Forensics - There is more to a picture than meets the eye

Abstract

Given an image or a video clip, can you tell which camera it was taken with? Can you tell if it was manipulated? Given a camera, or even just a picture, can you find on the Internet all other pictures taken with the same camera? Forensics professionals all over the world are increasingly encountering such questions. Given the ease with which digital images can be created, altered, and manipulated without leaving obvious traces, digital image forensics has emerged as a research field with important implications for ensuring the credibility of digital images. This talk will provide an overview of recent developments in the field, focusing on three problems. The first is collecting image evidence and reconstructing images from fragments, with or without missing pieces; this involves sophisticated file carving technology. The second is attributing an image to its source, be it a camera, a scanner, or a computer graphics program; the process entails associating the image with a class of sources sharing common characteristics (the device model) or matching the image to an individual source device, for example a specific camera. The third is attesting to the integrity of image data; this involves image forgery detection to determine whether an image has undergone modification or processing after being initially captured.
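
As a concrete illustration of the first problem, the sketch below shows file carving in its most basic form: scanning a raw byte stream for the JPEG start-of-image and end-of-image markers and writing out each contiguous match. This is an added example, not the speaker's system; the file name disk.img is hypothetical, and the sophisticated file carving technology mentioned in the abstract goes much further, reassembling fragmented files and coping with missing pieces.

from pathlib import Path

SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker (plus the first byte of the next marker)
EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(raw: bytes):
    """Yield byte blobs that look like contiguous, unfragmented JPEG files."""
    pos = 0
    while True:
        start = raw.find(SOI, pos)
        if start == -1:
            return
        end = raw.find(EOI, start + len(SOI))
        if end == -1:
            return
        yield raw[start:end + len(EOI)]   # include the end-of-image marker itself
        pos = end + len(EOI)

if __name__ == "__main__":
    raw = Path("disk.img").read_bytes()   # hypothetical raw disk or memory image
    for i, blob in enumerate(carve_jpegs(raw)):
        Path(f"carved_{i:04d}.jpg").write_bytes(blob)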

Biography

Nasir Memon is a Professor in the computer science department at the Polytechnic Institute of New York University, New York. He is the director of the Information Systems and Internet Security (ISIS) lab at Polytechnic (http://isis.poly.edu).

Prof. Memon's research interests include digital forensics, data compression, computer and network security, and multimedia computing and security. He has published more than 250 articles in journals and conference proceedings and holds multiple patents in image compression and security. He has won several awards, including the NSF CAREER Award and the Jacobs Excellence in Education Award. His research has been featured in NBC Nightly News, the New York Times, MIT Technology Review, Wired.com, and New Scientist magazine, among others.

He is currently the Editor-in-Chief of the IEEE Transactions on Information Forensics and Security and an associate editor for IEEE Security & Privacy magazine.

Prof. Memon is the co-founder of Digital Assembly (http://www.digital-assembly.com) and Vivic Networks (http://www.vivic.com), two early-stage start-ups in NYU-Poly's incubator.

He is a Fellow of the IEEE and an IEEE Signal Processing Society Distinguished Lecturer for 2011 and 2012.