Thursday, January 27, 2011

Rough GUIs and Video DJs

I've been doodling a few ideas for how the GUI may eventually look. Here's a rough one:

The idea is that you (the user) should have a large view of the environment, and be able to navigate freely (the controls will probably be similar to Maya's, which work pretty well for 3D navigation). You should also have small views of all of the smart cameras, as if you're in the control room of a news show (in the above sketch, these show up at the top of the screen; there will probably be a limit to the number of smart cameras you can have at once). Clicking on a camera should open a window with options specific to that camera, so you can give it commands or constraints which will override the default "intelligent" behavior.

I've been thinking a lot about how all of this can actually be exported, so the "virtual documentary" that I keep talking about can be watched later, and not just while the events are happening. One possibility would be for every camera to export its own footage as some sort of video file, so a user can watch any of them by themselves, or edit them together in Final Cut or Premiere or something.

But I really like the idea of editing the movie in real-time. While the individual cameras will be smart enough to cut to different angles, I think the user should be in charge of actually cutting between different cameras.

Basically, the user would be a "Video DJ." There's one camera window, let's call it the "master track," which is the one whose footage is actually exported when the program is closed. The master track is always linked to one camera, and shows the footage that it's capturing—in the GUI diagram above, the master track is the top window all the way to the left, and it's currently linked to camera 1.

By pressing a keyboard button (maybe the number keys would be tied to numbered cameras), the user changes which camera is being recorded on the master track. This way, while each individual camera is capturing a different event, the user can cut between them at will, like a DJ cutting between different records. There could even be different keyboard commands for different transitions, like fade-outs, wipes, dissolves, and so on. Or a user could pick some sort of default setting, like having the master track just alternate between cameras every couple of seconds.
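Just to sketch out how this Video DJ idea might look as a Unity script (all the names here are hypothetical, and I'm leaving out fancy transitions for now), the number keys could pick which camera is currently the master track:

```csharp
using UnityEngine;

// Hypothetical sketch: switches the "master track" between smart cameras
// when the user presses a number key. The array and class names are
// placeholders, not anything that exists in my project yet.
public class MasterTrackSwitcher : MonoBehaviour
{
    public Camera[] smartCameras;   // assigned in the Inspector

    void Update()
    {
        // Number keys 1..9 select the corresponding camera.
        for (int i = 0; i < smartCameras.Length && i < 9; i++)
        {
            if (Input.GetKeyDown(KeyCode.Alpha1 + i))
                SwitchTo(i);
        }
    }

    void SwitchTo(int index)
    {
        // Only the chosen camera renders full-screen; a crossfade, wipe,
        // or dissolve could be layered into this method later.
        for (int i = 0; i < smartCameras.Length; i++)
            smartCameras[i].enabled = (i == index);
    }
}
```

In a real version, the other cameras would keep rendering into their little preview windows instead of being disabled outright, but the cut-on-keypress logic would look about like this.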

Anyway, all of these recording things are just thoughts. The first version of the GUI will simply allow the user to navigate around and click on events, while a window will display a smart camera's view. DJ'ing will have to wait.

Wednesday, January 26, 2011

Let's Learn Unity

My project will run in Unity, a game engine which you can get here. So far, I've just been playing around with the free version, which has a solid set of features—I may move over to the full version later, if necessary.

After messing around with it for some time, it seems like Unity is a pretty cool program! They provide the skeleton of a platformer game to help you learn the interface, so I've been using that as a tutorial. Here's what Unity looks like when you open up the platformer demo:

Unity... IN SPACE.

This is the "Scene" view, which shows the entire map. When you hit the play button at the top, you switch to "Game" mode. In my project, the Game mode will actually look kind of similar to the Scene mode, because the user will be able to fly around the map wherever they want, trigger events, and play with their smart cameras.

Speaking of cameras, see that "Camera Preview" window in the bottom right? That shows what the currently selected camera is seeing. I'm going to start working on getting something similar to work when the project is actually running: while the user is doing their thing, smaller windows should show the views of the smart cameras. The user should also be able to select a camera and get a little window of options, so they can give the camera specific commands.
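One simple way I might get those in-game preview monitors is to give each smart camera a small normalized viewport rectangle along the top of the screen. This is just a sketch (the class name and layout numbers are made up), but Unity's `Camera.rect` property should make it pretty painless:

```csharp
using UnityEngine;

// Hypothetical sketch: lines each smart camera up as a small "monitor"
// along the top of the screen, like a news control room.
public class PreviewStrip : MonoBehaviour
{
    public Camera[] smartCameras;  // assigned in the Inspector

    void Start()
    {
        float width = 0.2f;  // each preview takes 20% of the screen width
        for (int i = 0; i < smartCameras.Length; i++)
        {
            // Camera.rect uses normalized viewport coordinates (0..1),
            // so this puts each preview in the top strip of the screen.
            smartCameras[i].rect = new Rect(i * width, 0.8f, width, 0.2f);
            smartCameras[i].depth = 1;  // draw previews on top of the main view
        }
    }
}
```

I still need to figure out how clicking on one of these previews would pop open that little options window, but this would at least get the monitors on screen.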

Programming in Unity involves writing lots and lots of scripts in C#. When I first tried to edit a script, it opened up in this awful editor called Unitron.


Thanks a lot to Nathan for showing me how to open scripts in Unity's MonoDevelop instead. Turns out that in Unity, you can go to Assets->Sync MonoDevelop Project. That way, edits made in MonoDevelop automatically synchronize in Unity.

Too bad "Unitron" sounds like an awesome robot, while "MonoDevelop" is boring.

Now that I've figured out how to edit scripts and apply them to objects in Unity, it's time to make a test environment and set up some simple tests that a user can trigger. It's really simple to import Maya files into Unity—I tested it with a model of a penguin wearing a top hat and monocle, of course.

Penguin... IN SPACE.

Once I build an environment in Maya, I'll work on simple scripts for things like making boxes move when a user either clicks on them or presses a keyboard button. Meanwhile, I'll be trying to make a Camera Preview window actually appear in-game. Exciting stuff!
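A first test script might be as simple as this sketch—attach it to a box, and the box slides whenever the user clicks it or hits a key. (The class name, key, and step distance are all placeholders I'm making up for illustration.)

```csharp
using UnityEngine;

// Hypothetical sketch: a box that moves when clicked or when a key
// is pressed. Requires a Collider on the object for clicks to register.
public class MoveOnTrigger : MonoBehaviour
{
    public Vector3 step = Vector3.forward;  // how far to slide per trigger

    void OnMouseDown()
    {
        // Fires when the user clicks the object's collider.
        transform.Translate(step);
    }

    void Update()
    {
        // Also slide when the spacebar is pressed.
        if (Input.GetKeyDown(KeyCode.Space))
            transform.Translate(step);
    }
}
```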

Tuesday, January 25, 2011

Cinematography Basics: Panning and Tilting and Tracking, Oh My

I've been playing around in Unity to get a feel for how this game engine works, so soon I can get started on building a very simple environment and moving a camera around in it. Later this week I will post about my Unity progress, but I figured it couldn't hurt to do some primers on cinematography! My project is all about virtual camerawork, but I've done a lot of live-action filmmaking, and the same rules and terminology apply. Plus, I love drawing little diagrams, and everybody loves blog posts with pictures.

First up, basic camera moves. Let's break it down into three basic actions: pans, tilts, and tracks. In general, most complex camera moves are just combinations of these elements.

(Note: In some cases, I've heard certain terms used differently by different directors, in different filmmaking books, on different websites, and so on. At the very least, these primers will make clear what I mean when I use these phrases, so you'll know what I'm talking about in later blog posts.)

A camera pans when it rotates horizontally, either from left to right or from right to left.

A camera tilts when it rotates vertically, either from up to down or from down to up.

In a tracking shot, the entire camera moves, rather than just the direction it's pointing in. I've actually heard this divided up into "pedestal shots" (where the camera raises or lowers, as if it's on a tripod), "dolly shots" (where the camera moves forward or backward, as if on dolly tracks), and "trucking shots" (where the camera moves left or right). But for simplicity's sake, I'm going to lump them all into "tracking shots," where the camera physically moves. You can also think of this as a "crane shot," as if the camera is on a crane which can move it in any direction.

These elements are great enough on their own, but combine 'em (and throw in a roll for good measure) and you get 6 delicious degrees of freedom, allowing your camera to position and orient itself however it wants.
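In Unity terms, these three moves map pretty directly onto a camera's Transform: pans and tilts are rotations, and tracking is a translation. Here's a rough sketch of what I mean (the speeds are arbitrary, and a real smart camera obviously wouldn't do all three at once every frame):

```csharp
using UnityEngine;

// Hypothetical sketch: the three basic cinematography moves expressed
// as operations on a Unity camera's Transform.
public class BasicMoves : MonoBehaviour
{
    public float rotateSpeed = 30f;  // degrees per second
    public float trackSpeed = 5f;    // world units per second

    void Update()
    {
        float dt = Time.deltaTime;

        // Pan: rotate horizontally around the world's up axis.
        transform.Rotate(Vector3.up, rotateSpeed * dt, Space.World);

        // Tilt: rotate vertically around the camera's own right axis.
        transform.Rotate(Vector3.right, rotateSpeed * dt, Space.Self);

        // Track: physically move the whole camera (a dolly forward, here).
        transform.Translate(Vector3.forward * trackSpeed * dt, Space.Self);
    }
}
```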

Next time on Cinematography Basics: Zooms, focus, framing, and composition!

Monday, January 24, 2011

Design Document

I know what you're thinking: "That abstract was pretty cool, Dan, but is there any way you could express the same ideas in a much longer and more detailed form, with lots of references to other papers and specific plans for your implementation? And while you're at it, could you include some sort of Gantt chart? Please phrase your response in the form of a design document."

So how did I know exactly what you were thinking? It is both a gift and a curse, and I would rather not talk about it now.

Regardless, my design document can be found below!


Thursday, January 20, 2011

Introduction and Abstract

Welcome to the exciting world of intelligent camera control! This blog will follow the progress of my senior design project, so stay tuned for updates.

Before I get into the nitty-gritty of virtual cinematography and how I plan to implement it, let's start at the very beginning. Here is the abstract of my project:

Users often view virtual worlds through only two cameras: a camera which they control manually, or a very basic automatic camera which follows their character or provides a wide shot of the environment. Yet real cinematography features so much more variety: establishing shots and close-ups, tracking shots and zooms, shot/reverse-shot, bird's eye views and worm's eye views, long-takes and quick cuts, depth of field and rack-focus, and more. For my project, I plan to implement an "intelligent camera" for use in a virtual 3D world using the Unity game engine: As events occur in real-time, this camera will automatically choose shots to depict them, and essentially create a "virtual documentary" of the events as they happen.

The camera will need to position, pan, tilt, zoom, and track as necessary to keep an unobstructed view of events in frame, while obeying traditional standards of cinematography (following the 180-degree rule, avoiding jump-cuts, etc). An artist can also play the role of "director" and give the camera instructions, such as ordering it to place more priority on one event over another, drawing a certain path for the camera to follow, and moving the camera to different vantage points (to mimic a helicopter or crane-shot, for example); when not being given specific orders, the camera will revert to its automatic "intelligent" state. Ultimately, a user should be able to place multiple intelligent cameras in the 3D world, trigger events to occur, and then cut between the cameras to watch a well-shot "documentary" of the events be constructed in real time.

Coming soon: My design documents, which more specifically detail how I plan to go about my project. Also, because this incorporates many techniques from shooting live-action film, I'll explain any terminology and concepts of real-life cinematography that I will be referring to later.