Print 2 Phone

Print 2 Phone started as one of my hobby projects. The idea was to create an application that redirects print jobs from a desktop computer to mobile devices: instead of printing a document to paper, one can print it virtually to a mobile device. The first version was released in late 2010 for OS X and iOS; the second version was one of the launch apps in the Mac App Store and was featured by Apple. I created prototype versions for Windows XP/7 and Android as well, but I couldn't find an easy way to sell them on those platforms (Microsoft doesn't have an app store for third-party desktop applications, and the Android Play Store is available only for free products in Hungary). A few months later I realized that I didn't have time for a commercial application besides my full-time job, so I pulled it from the market. I plan to bring it back as soon as I have enough time to fully support its development.

Technical details: Print 2 Phone hooks into the printing process and converts the documents into PDF format. On OS X this is quite easy, since a PDF service does exactly that. On Windows, things are a lot more complicated, and I had to create a virtual printer driver. The PDF service/printer driver opens a desktop application that looks for mobile devices that are online, and if the user picks one, the PDF document is sent to that device.

OS X Leopard/Snow Leopard - Python for the prototype, Objective-C for the released version
iOS - Objective-C
Windows XP/7 printer driver - C++
Windows XP/7 application - Python for the prototype
Android - Java
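
At its core, the desktop side boils down to taking the PDF produced by the print system and shipping it to the selected device over the network. Below is a minimal Python sketch of that last step; the address, port, and length-prefixed framing are assumptions for illustration rather than the actual Print 2 Phone protocol, and device discovery is omitted.

    import socket
    import struct
    import sys

    def send_pdf(pdf_path, device_host, device_port=9100):
        # Send a PDF file to a listening mobile device over TCP.
        # The port number and framing are illustrative assumptions.
        with open(pdf_path, "rb") as f:
            payload = f.read()
        with socket.create_connection((device_host, device_port)) as conn:
            # Prefix the document with its size so the receiver knows
            # how many bytes to expect before rendering the PDF.
            conn.sendall(struct.pack(">I", len(payload)))
            conn.sendall(payload)

    if __name__ == "__main__":
        # A PDF service passes the path of the generated PDF among its
        # arguments; the device address is hard-coded for this sketch.
        send_pdf(sys.argv[-1], device_host="192.168.0.42")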


Ustream Live Mobilizer

The Ustream Live Mobilizer makes it easy for artists and musicians to have a customized iPhone application that can be put together and updated from a CMS. The first application built with the Live Mobilizer was Hollywood Records' official Miley Cyrus app.


Dual Display

In 2009, after university, I started working on a garage project. The idea was to create an application that attaches a mobile device's screen to a desktop computer as a secondary display. I built a prototype for Windows XP/OS X Leopard on the server side and for iPhone OS on the client side. When I applied for a full-time job, I thought I could finish and release this product in my spare time, but I never managed to get there for three main reasons. First, Ustream had some really interesting challenges, and in my spare time I started to work on those. Second, with the launch of the iPad the idea became much more obvious, and other companies provided excellent solutions I couldn't compete with on my own. In addition, this project has several components that run in kernel mode, so it needs far more extensive testing and support than a one-man project can provide. Third, with the launch of Windows Vista, Microsoft introduced a whole new display driver architecture that limits virtual display drivers.

Technical details: A display driver creates a new virtual display on the desktop computer; the content of this screen is captured, compressed, and sent to the mobile device, which receives the image stream and displays it.

Windows XP display driver - C
Windows XP application - C++
OS X Leopard display driver - C++
OS X Leopard application - Objective-C
iPhone - Objective-C
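
In user space, the capture/compress/send loop can be sketched in a few lines of Python. The sketch below relies on the third-party mss and Pillow packages and a made-up length-prefixed protocol; the real project instead captured a virtual display created by a kernel-mode driver.

    import io
    import socket
    import struct
    import time

    from mss import mss      # third-party screen capture package (assumption)
    from PIL import Image    # Pillow, used here for JPEG compression (assumption)

    def stream_screen(device_host, device_port=5000, fps=15, quality=60):
        # Capture the desktop, JPEG-compress each frame and push it to the
        # mobile client as a length-prefixed image stream.
        with mss() as grabber, socket.create_connection((device_host, device_port)) as conn:
            monitor = grabber.monitors[1]    # the primary display
            while True:
                shot = grabber.grab(monitor)
                image = Image.frombytes("RGB", shot.size, shot.rgb)
                buffer = io.BytesIO()
                image.save(buffer, format="JPEG", quality=quality)
                frame = buffer.getvalue()
                conn.sendall(struct.pack(">I", len(frame)) + frame)
                time.sleep(1.0 / fps)

    if __name__ == "__main__":
        stream_screen("192.168.0.42")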



Real-time 3D Rendering of Virtual Objects into Image Sequences

Supervisor: Dr. Tamás Nepusz

Our eyes are the primary organs we use for observing the world. In order to simulate the real world in a virtual environment, it is essential to convince our eyes that the illusion is real. This goal is fulfilled in many cases; just think of movies, where it is often difficult to tell which objects in a scene are real and which ones are rendered by computers. Without such effects, the success of several movies would be questionable. However, movies are only one of the best-known examples where the real and virtual worlds are mixed and complement each other.

The first step in inserting a computer-generated model into a real scene is to reconstruct the exact properties and movements of the camera used for recording the scene. The first attempts involved cameras moved on a fixed path that was completely known before the recording phase, so it was possible to obtain more or less precise estimates of the camera position at every instant. However, this approach was rather limited, and the current state-of-the-art solutions are able to estimate camera parameters solely from the recorded image sequence, either during post-processing or in real time, allowing one to render virtual objects onto practically any kind of video recording. These systems and this specific field of research are called "augmented reality".

In my thesis, I give a review of the theoretical background of these systems and demonstrate the way they are used in practice by constructing a possible implementation. I deal with all major components of augmented reality systems: I discuss methods for estimating the intrinsic and extrinsic parameters of real cameras and I present a software module that renders computer-generated models onto images recorded by the calibrated camera. This module is also able to deal with the problem of rendering virtual objects that are partially covered by real ones. I also present an application that demonstrates the most important features of the implemented module.
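
As a rough illustration of the two estimation steps and the projection they enable, here is a compressed Python sketch using OpenCV. It shows one common way to implement them, not the code from the thesis; the chessboard pattern size and file names are placeholders.

    import numpy as np
    import cv2   # OpenCV; one common toolkit for these steps

    # Intrinsic calibration from chessboard views: the 3D corner coordinates
    # of the calibration pattern are known, their 2D projections are detected.
    pattern = (9, 6)                                   # placeholder pattern size
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in ["calib_01.png", "calib_02.png"]:      # placeholder file names
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # Extrinsic estimation for one frame: solvePnP recovers the camera pose
    # from known 3D points and their detected 2D projections.
    _, rvec, tvec = cv2.solvePnP(objp, img_points[0], camera_matrix, dist_coeffs)

    # Rendering in miniature: project the corners of a virtual cube into the
    # frame using the estimated intrinsic and extrinsic parameters.
    cube = np.float32([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0, 0, -1], [1, 0, -1], [1, 1, -1], [0, 1, -1]])
    projected, _ = cv2.projectPoints(cube, rvec, tvec, camera_matrix, dist_coeffs)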

Download (written in Hungarian)



Techniques of Argument in the Film A Few Good Men

Lecturer: Dr. János Tanács

This paper was written in Hungarian. If you read Hungarian, you may want to switch the site language to read the brief.

Download