Saturday, May 14, 2011

The Grand Finale

I figured that this needed a more climactic final post (though maybe it would be more fitting to have a crazy cliffhanger-ending which leaves the door wide open for a sequel). Anyway, here is my final presentation video, featuring a slightly awkward voiceover by yours truly:

I also never posted my final poster, so here's that too! Note that I made the poster before adding models and textures, so the images just show the old cubes and spheres:

Anyway... that's it! Thanks for watching the progress of my senior project—I hope you enjoyed my result videos, learned from my rules about cinematography, and were somehow able to bear my terrible puns and post titles.

Friday, May 6, 2011

Team Fortress

During my final presentation, someone asked me what I think some of the applications of my project would be. I mentioned animation and simulations, but I also said that it could be cool for video games: while you're playing, intelligent cameras could be recording your gameplay, and you could then edit it all together later.

Yesterday I took a break from writing my final report to play a little Team Fortress 2. Guess what they just added? The Replay Update, which records your matches and then allows you to make movies by editing 3 different camera views together.

So it seems Valve beat me to it. They do have achievements for making movies, however. Maybe that's what my project is missing.

Wednesday, April 27, 2011

Roll Hard: The End

Well, tomorrow it's time: The final presentation.

I don't want to bore you with text (like I usually do), so here are two videos instead. First up, some footage showcasing the newly textured scene, along with models imported from Maya. I also added a bunch more shot types for the Follow idiom (most importantly, establishing shots), and there are now crowd scenes. Check it out below:

And finally, perhaps my last video. Consider it a sequel to "Birds of a Feather." Except instead of fighting, Professor Patton von Penguin and Colonel Steven Stork must now join forces... to hop up and down and wave their arms while a camera films them. Anyway, enjoy the automatically generated camera shots, the adjustable height and distance sliders, and the epic fight scene at the end.

Thursday, April 21, 2011

Roll Hard: The Ball Ultimatum

Well, we're in the home stretch now—I designed and printed out my final poster. Hard to believe it's almost over, but let's make the most of the remaining time! We'll start with a list:

First thing: I added much more functionality to the camera menus. There are now sliders for shot duration, distance from the target, and height, as well as a few preset buttons. Sliders can be adjusted while the camera is shooting—you'll see in the video below that I slide the distance in and out, causing the camera to fly toward and away from the target while still filming it. The Follow button is the default: it makes the camera track behind the target from a normal distance. Helicopter mode sets distance and height very high, so the camera films from far away and above. Bourne mode mimics the "Bourne" trilogy by setting the camera to cut often, shoot from close angles, and shake like crazy.
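Here's a rough sketch of how those presets might map onto the slider values (the parameter names and numbers are made up for illustration, not the actual project code):

```python
# Hypothetical sketch: each preset button just overwrites the live slider values.
# Names and numbers are illustrative, not the project's real settings.

PRESETS = {
    # default: track behind the target from a normal distance
    "follow":     {"shot_duration": 6.0,  "distance": 8.0,  "height": 2.0,  "shake": 0.0},
    # film from far away and above
    "helicopter": {"shot_duration": 10.0, "distance": 40.0, "height": 30.0, "shake": 0.0},
    # cut often, get close, shake like crazy
    "bourne":     {"shot_duration": 1.5,  "distance": 3.0,  "height": 1.5,  "shake": 0.8},
}

def apply_preset(camera_settings, name):
    """Overwrite the live slider values with a preset bundle."""
    camera_settings.update(PRESETS[name])
    return camera_settings

settings = {"shot_duration": 6.0, "distance": 8.0, "height": 2.0, "shake": 0.0}
apply_preset(settings, "bourne")  # the camera now cuts fast, shoots close, and shakes
```

Since the presets only touch the same values the sliders expose, you can still drag any slider afterwards to tweak a preset mid-shot.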

Here's some footage (you can pretend that the dialogue is actually a fight scene. Or a dialogue scene taking place during an earthquake):

Second thing: you'll note in the video that the cameras now rotate slowly when they're not engaged. Right now they're ending up tilted, which is disorienting, so I'm going to fix that. But I will probably have them slowly do something when unengaged, just to look more visually appealing.

Third thing: In the corner of the video you can also see a crowd of my little blob actors. I started making a crowd event for the cameras to shoot, but it's not finished yet—you'll see it soon. But basically, I'm planning on leaving the event types as 1 actor events (follow), 2 actor events (dialogues), and 3+ actor events (crowds) for now. This is what it happened to look like in my original camera behavior tree anyway.

Fourth thing: You can't really tell in the video, but I rewrote a bunch of the code for the initial placement of the camera in a shot to ensure that it doesn't collide and keeps the target unoccluded. The major change is that when the target is near a wall and the camera is trying to track behind it, the camera now moves upwards and looks down; as the target moves away from the wall, the camera slides down and moves forward slowly until it can track at normal distance.
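The wall-handling behavior might look something like this rough 2D sketch: when there isn't enough room behind the target, the lost tracking distance gets converted into extra height (everything here is illustrative, not the actual project code):

```python
def track_position(target_pos, wall_dist, desired_dist, normal_height):
    """
    Place a camera behind a 2D target. If a wall sits closer behind the target
    than the desired tracking distance, slide the camera up and in so it clears
    the wall and looks down; as the target moves away from the wall, the camera
    naturally eases back to normal distance and height. Illustrative sketch.
    """
    if wall_dist >= desired_dist:
        # Plenty of room: normal tracking distance and height.
        back, up = desired_dist, normal_height
    else:
        # Not enough room: shrink the horizontal offset to fit against the wall,
        # and convert the lost distance into extra height (a look-down shot).
        back = wall_dist
        up = normal_height + (desired_dist - wall_dist)
    x, y = target_pos
    return (x - back, y + up)
```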

Fifth thing: The cameras now look subtly ahead of a target based on its velocity, instead of looking directly at the target itself. Remember the rule of letting the actor lead? No? Here's a picture:

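And in code terms, that "let the actor lead" offset can be as simple as this (the lead factor here is made up):

```python
def look_at_point(target_pos, target_vel, lead=0.4):
    """Aim slightly ahead of the target along its velocity, so the actor
    'leads' the frame. The lead factor is an illustrative guess."""
    return tuple(p + lead * v for p, v in zip(target_pos, target_vel))

# A target at the origin moving at 5 units/s along x gets a look-at
# point shifted 2 units ahead of it.
look_at_point((0, 0, 0), (5, 0, 0))
```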
It's a small difference, and hard to notice. But if it's not happening, you'll feel uneasy, and you won't be sure why! So now you can rest easy again. However, speaking of rules...

Sixth thing: ...Editing rules are currently being violated. I realized that I never actually posted about those on this blog, so here they are in exciting picture form:

The 180-degree rule is related to the invisible line of action in a scene. Once you've established that the camera is on one side of that line, abruptly cutting across it will be disorienting for viewers. For example, look at the dialogue scene in the drawing above: the characters seem to switch places when the camera cuts. Alternatively, think about a ball rolling from left to right—if the camera cuts across the line, it seems like the ball changes direction. This happens sometimes in my project right now.

The 30-degree rule is kind of the inverse: if the camera doesn't move enough when it cuts, you get a jump cut. In the drawing above, it would seem like the camera suddenly lurched to the side or stuttered, instead of cutting to a new angle. So when you cut, the camera-to-target angle should change by more than 30 degrees. This also applies to cutting closer to or further from the target—the jump should be large enough to seem motivated.

So right now these principles aren't being enforced, but that will be fixed soon. Essentially I'm going to set ranges when the camera cuts, so that it changes enough from the previous angle, but not enough to break 180. Hopefully that will work.
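A rough sketch of what that range check might look like, treating camera positions as bearings around the target, with the line of action running along the 0-180 axis (purely illustrative):

```python
import math

def valid_cut(prev_angle, new_angle):
    """
    Check a proposed cut against the 30-degree and 180-degree rules.
    Angles are camera bearings around the target, in degrees, with the
    scene's line of action assumed to run along the 0-180 axis.
    Illustrative sketch, not the project's code.
    """
    # 30-degree rule: the camera must move more than 30 degrees around the target.
    delta = abs((new_angle - prev_angle + 180) % 360 - 180)
    if delta <= 30:
        return False  # too small a change: this would read as a jump cut
    # 180-degree rule: both shots must stay on the same side of the line of action.
    same_side = math.sin(math.radians(prev_angle)) * math.sin(math.radians(new_angle)) > 0
    return same_side
```

In practice the camera would keep sampling candidate angles until one passes, rather than rejecting a cut outright.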

Anyway, time is quickly ticking away. The big list of things to do: crowd events, editing enforcement, and more camera shots for the idioms. After that, I will use remaining time to continue tweaking things. I'm also going to be working on my final presentation pretty soon, so that's exciting as well!

Thursday, April 14, 2011

Bally Roller and the Deathly Menus

Wow, I've run out of Harry Potter titles already... I think I'll have to go back to Roll Hard, but with more subtitles. "Roll Hard: Electric Boogaloo," "Roll Hard: The Quickening," "Roll Hard: Back in the Habit," etc.

Oh right, I made some progress on the project too!

As I said last week, I wanted to get rudimentary menus implemented, so that the user can change the settings of the cameras while they're filming. Figuring out how to deal with GUIs in Unity was a bit of a struggle, but I think I managed to pull it off: You can click on any of the camera windows (I decided that forcing the user to click on the camera itself would be a pain, since they can teleport around instantly) to open up a (color-coded!) pop-up menu. Clicking on other camera windows changes the pop-up menu, and pressing esc removes it.

Right now, the menu allows you to change the shot duration of a camera (essentially how long each shot lasts before it cuts away). Watch the video below, where I shorten the shot duration of the camera filming the dialogue scene, and imagine that they're having an unbelievably exciting conversation to match the frantic pace of the editing. Setting the duration to 0 tells the camera to stop cutting entirely and just stay within its current shot-type (you can see me demonstrate this with the camera in the pachinko machine).

Getting the shot duration to work was more complicated than I was expecting, because it involved editing the behavior tree while the camera was... inside of it. The Wait nodes, specifically. Basically, I kept having issues with the camera getting confused and hopping around its behavior tree (like suddenly skipping from the follow idiom to the dialogue idiom). I think I have that under control now, so it should be easier to make new sliders and buttons on the menu.
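Stripped of the behavior-tree machinery, the shot-duration logic might boil down to something like this (an illustrative sketch, not the actual Wait node code):

```python
class ShotTimer:
    """
    Sketch of the shot-duration slider: each shot runs until the timer
    expires, then the idiom cuts to its next shot. A duration of 0 means
    'never cut', so the camera holds its current shot indefinitely.
    Illustrative only.
    """
    def __init__(self, duration):
        self.duration = duration
        self.elapsed = 0.0

    def tick(self, dt):
        """Advance time by dt seconds; return True when the camera should cut."""
        if self.duration == 0:       # slider at 0: hold the shot forever
            return False
        self.elapsed += dt
        if self.elapsed >= self.duration:
            self.elapsed = 0.0       # reset for the next shot
            return True
        return False
```

Because the slider only changes `duration`, adjusting it mid-shot just shortens or lengthens the current wait instead of yanking the camera out of its place in the tree.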

Since presentations and posters are on the horizon, I'm going to add a bunch of new shot types and a few events for variety; I also want to continue tweaking the camera movement so that it can track the targets better. I also looked into camera settings like depth of field, blur, graininess, etc... but they're only available in Unity Pro (I guess I've been using Unity Amateur).

I think the SIG Lab has Unity Pro though, so I'd love to experiment with some of those things at some point. Have I mentioned that I want to make a film noir camera filter? Yes? Well I'm mentioning it again, because that would be awesome.

Thursday, April 7, 2011

I've Got To Admit, It's Getting Beta

Almost forgot: Beta reviews were earlier this week!

One of the useful things I got out of it was a suggestion by Norm about how to handle tracking in complex environments. We talked a bit about how the characters in the "Harry Potter" movies are always running through forests (See? I had a reason for titling recent blog posts after those books), and the camera manages to track alongside them while keeping them in view and avoiding other trees. Essentially, think of it like this: instead of just trying to navigate the camera through the obstacles while looking at the target, imagine that there's a line from the camera to the target, and you're really trying to navigate the line through the environment. Since the camera can dolly in and out, the line should be able to get larger or smaller; it should also be able to be broken for a short time, as trees move through the view. Luckily, I have a shortcut: if the target is ever occluded for too long, I can just instantly cut to another angle.
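That occlusion-timeout shortcut might be sketched like this (the grace period is made up):

```python
class OcclusionWatch:
    """
    Sketch of the 'breakable line' fallback: the camera-to-target line may be
    broken briefly (a tree passing through frame), but if the target stays
    hidden longer than a grace period, the camera should cut to a new,
    unoccluded angle. Numbers are illustrative.
    """
    def __init__(self, grace=0.5):
        self.grace = grace
        self.hidden_for = 0.0

    def update(self, occluded, dt):
        """Return True when it's time to give up and cut to another angle."""
        self.hidden_for = self.hidden_for + dt if occluded else 0.0
        return self.hidden_for > self.grace
```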

Norm and Joe also told me about the new camera tools for Google SketchUp (Read more about them here!) These are static cameras, so they don't animate like mine do. However, they do allow you to preview shots from different angles, and they have options for things like focal length. So I definitely plan on checking them out, as they might give me some ideas for my project. We briefly discussed a future goal of turning my project into a plugin for SketchUp... But that's just for the future.

Also: This is probably the best blog post title that I will ever come up with. It's all downhill from here.

Wednesday, April 6, 2011

Bally Roller and the Half-Shot Idiom

Last week, I managed to get basic cutting implemented. I said that my next goal would be to get basic idioms implemented, and decided to try tackling a basic dialogue scene. Let's check out some footage! (Note: Please watch this with another person so that both of you can provide improvised voice-overs for the characters)

And there you have it! Obviously, it's not a very exciting discussion: the camera just switches between two over-the-shoulder shots. But this is really the final piece in the foundational puzzle of my intelligent camera system: now the camera behavior trees handle different types of events with different shot idioms. So as you can see, cameras are reacting to a "follow" event by alternating between tracking shots and panning shots; they react to a "dialogue" event by alternating between over-the-shoulder shots. But now that I know that the behavior tree can handle more than one type of event, and more than one type of shot, it should be simple to add more. Awesome.

I'm going to work on more shots first, because many events will share the same types of shots (and just employ them differently). Currently, initial shot choices are handled by using an invisible dummy: the dummy starts at the position of the target and flies away from it in a certain direction; once the dummy reaches a predetermined distance, or collides with something, it stops. That way, when the camera warps to the position of the dummy, it should start with a clear view of the target (and if the target is within a box, for example, the camera will be inside the box too). For some events, such as the dialogue, the dummy and camera need to worry about multiple targets—in this case, for an "over-the-shoulder A" shot (that's over character A's shoulder, towards character B), the dummy starts at A, and moves backwards and to the side to get B in frame as well. Essentially, every type of shot has different instructions for the dummy, and then different instructions for how the camera should film the target. Soon I'll be adding aerial shots, establishing shots, reverse-tracking shots, close-ups, and so on.
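Here's a rough 2D sketch of the dummy's fly-out logic (the `blocked` callback is a stand-in for the engine's collision query; everything here is illustrative):

```python
def place_camera(target, direction, max_dist, blocked, step=0.25):
    """
    Sketch of the invisible-dummy placement: start at the target, fly outward
    along `direction` until reaching max_dist or hitting something, and use
    the stopping point as the camera's starting position. This guarantees a
    clear line back to the target. `blocked(p)` stands in for the engine's
    collision test; all names are illustrative.
    """
    x, y = target
    dx, dy = direction
    dist = 0.0
    pos = (x, y)
    while dist + step <= max_dist:
        nxt = (pos[0] + dx * step, pos[1] + dy * step)
        if blocked(nxt):          # would collide: stop just short of the obstacle
            break
        pos, dist = nxt, dist + step
    return pos
```

For a two-target shot like over-the-shoulder, the same loop works; only the starting point (character A) and direction (backwards and to the side, to frame B) change per shot type.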

But first, I do want to work a bit on the other part of my project: user constraints. While the program is running, the user should be able to click on a camera and get a little pop-up window allowing them to adjust settings on the fly (such as how often to cut, whether to only stick to certain types of shots, etc). I'm not sure how difficult this kind of thing is to do in Unity, but I want to start figuring that out as soon as possible.

Thursday, March 31, 2011

Bally Roller and the Order of the Edits

The most strained post title ever? Perhaps. I ran out of "Die Hard" movies, so I'm moving over to "Harry Potter." I don't know what I'll do when it comes time to make my eighth video. "Friday the 13th" movies, maybe? Not enough franchises made it past seven entries.

Anyway, I now have a very basic form of cutting implemented! Let's go to the tape:

In this video, I'm first showing just the ball rolling down the simple slopes, with the red camera tracking it; then, I release both the simple ball and the pachinko ball, and allow a camera to follow each one. The green camera is also set to cut more often than the red camera. Pachinko is just more exciting to watch, I guess.

Since I wanted to start with very simple editing, the cameras currently only cut back and forth between two shots: a tracking shot from behind the target, and a stationary shot which pans with the target. Here is a diagram of the behavior tree that the cameras are implementing:

That may be confusing, so I made a simpler version as well, with more words and less cryptic arrows. Also, more bright colors, and harder-to-read text. Let's take a look at that version:

Hopefully that one will help. Basically, the camera has two branches for whether it's currently engaged with an event or not (originally I was going to have the camera wander aimlessly when not engaged, but I disabled that for the video above). When it is engaged, it enters a shot idiom, which is a sequence of commands for handling a certain type of event. In this case, there's only one idiom: cut to a tracking shot behind the target, wait a certain amount of time, cut to a stationary side shot, wait, and repeat (until no longer engaged). Having shot idioms also helps deal with issues like the 180-degree rule (I never made a post about Editing Rules, did I? Don't worry, I will), since we can make sure that the next shot doesn't violate anything with respect to the previous one.
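Flattened into plain code, the tree's structure might look something like this toy version, with an engaged/idle selector on top and a looping two-shot idiom underneath (illustrative only):

```python
class Camera:
    """
    Toy version of the camera's behavior tree: a selector picks the 'engaged'
    or 'idle' branch, and the engaged branch runs a looping shot idiom
    (tracking shot, wait, side shot, wait). Purely illustrative structure,
    not the actual tree.
    """
    def __init__(self):
        self.engaged = False
        self.shots = ["track_behind", "pan_from_side"]  # the one idiom so far
        self.index = 0
        self.current = None

    def tick(self):
        """One traversal of the (very flattened) behavior tree."""
        if not self.engaged:            # idle branch
            self.current = "idle"
            return self.current
        # Engaged branch: cut to the next shot in the idiom, looping forever.
        self.current = self.shots[self.index]
        self.index = (self.index + 1) % len(self.shots)
        return self.current
```

Because the idiom is just an ordered list, a new event type only needs its own shot list, and rule checks (like 180-degree safety) can compare each shot against the previous one.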

Obviously this is just the foundation for the cutting system, but behavior trees make it easier to build off of this. The next step will be to implement a wider variety of shot choices (establishing shots, reverse tracking shots, aerial shots, etc). After that, I will implement more idioms for handling different types of events (a dialogue scene between two characters is probably the most important one). The shot lengths will be variable eventually too, so that cuts don't happen so regularly.

Anyway, beta reviews are coming up pretty fast. I'm planning on having more shot choices added by then, and I'm going to play around with more types of events. I'm going to try to have multiple idioms as well, but they probably won't be as complex as dialogue or crowd scenes. Still, we'll see. Stay tuned for "Bally Roller and the Half-Shot Idiom" next week!

Thursday, March 24, 2011


Time for a flashback to my original proposal and design document! How have I been doing so far?

I think, as happens often, I underestimated how long some things would take: specifically, I didn't expect camera placement and tracking to be as complicated as they turned out to be. On my Gantt chart, I actually planned for tracking in more complex environments to be completed today, and that did sort of happen with the Pachinko machine; however, this has been at the cost of the cutting-between-different-shots system, which I am only now working on. But I do think that holding off on cutting was the right decision: I want to make sure that individual shots are handled well before I move on to multiple shots.

On the other hand, some components turned out to be simpler to do than I was expecting. This didn't happen as often as I would like, but I'll take what I can get! The most important feature was that having multiple cameras (and multiple viewports) ended up being decently simple to implement—when I first started, and didn't really know Unity well, I was worried that this would be a pain. I am glad that it wasn't. While my original idea was to have cameras being added and removed as events occurred, I'm thinking now that it would make things unnecessarily complex, and the GUI would be really confusing if viewports kept popping in and out. So for now, I'm just working with 3 cameras at all times.

I like making lists. Let's make a couple:

Things I've done:
  • A mouse/keyboard camera system that mimics Maya or Unity's Scene view (allowing the user to pan, rotate, and zoom at will)
  • A GUI which displays the viewports of multiple cameras within the scene
  • Keyboard commands for the user to cut between each camera on the Master Track viewport
  • The ability for objects to determine when they are involved in an event
  • Visual representations of cameras within the scene
  • Cameras that automatically choose a suitable (not in collision with geometry, and with the target unoccluded) initial position when filming an event
  • Cameras that can follow an object through complex environments without colliding with the environment
  • Cameras that can follow an object and keep it visible, without resorting to locking it in the center of the frame
  • A Director which assigns the cameras to cover different events based on whether they are already engaged
  • Very basic behavior trees for the cameras and Director

Things to do:
  • Implement cutting to different shots (obeying editing laws like the 180-degree rule)
  • Behavior trees for different types of situations (crowd scenes, dialogues, etc)
  • Priorities for events
  • Allow user to set constraints on cameras
  • Implement genres for cameras (action movie, romantic comedy, documentary, etc)
  • Figure out how to export video footage from the Master Track
  • Continue improving camera movement

So let's talk milestones. The most important remaining objective is getting the cameras to cut between different shots. Once that's working, I've pretty much accomplished the most basic idea of my project: cameras that automatically make a "virtual documentary" of the events around them. Obviously it won't be an interesting documentary, since the shot choices won't be that exciting, but it will still be a documentary. For the upcoming Beta Review, I'm hoping to have camera cutting implemented (along with constraints on some cinematography maxims like the 180-degree rule, the 30-degree rule, and so on).

After that, it will be time to actually make things visually interesting: more advanced behavior trees so that the cameras can react to different types of (more complicated) situations. The priority will obviously be events with multiple actors involved, like handling dialogue scenes with a shot/reverse-shot idiom. So by the time of final presentations, the cameras should use behavior trees to create much more stylish movies—users should also be able to select the cameras and set certain options like genres. Finally, a user should also be able to export their Master Track and watch the actual movie that was produced by the combination of the cameras.

Assuming everything else is completely 100% four-stars Oscar-winningly perfect by that point, remaining time will be used for adding more complex idioms, crazier genres (a high-contrast film noir one would be the best), and other fancier things.

So there's my to-do list, my did-do list, and my major upcoming milestones. Fun (kind-of-scary) stuff!

Live Free or Roll Hard (AKA: The Pachinko Movie)

Since last week, I've done some further tweaking of the tracking algorithm, so the cameras are doing a decent job of following their targets. I didn't have a movie last time, so let's start off with some video footage! Notice how the cameras try to keep the target within the center of the frame without abruptly locking in; they also move in and out a bit to stay on course while avoiding obstacles. It's kind of working like a spring system, with a desired orientation towards the target and a desired position behind it—the desired position can also be put on the side, for instance, to track alongside the ball.
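That spring idea might be sketched in one dimension like this (the constants are made up):

```python
def spring_step(cam_pos, cam_vel, desired_pos, dt, k=20.0, damping=6.0):
    """
    One step of a damped-spring follow (1D for simplicity): the camera
    accelerates toward its desired position (e.g. a point behind or beside
    the target) instead of snapping to it, which smooths out the target's
    sudden stops. Spring constants are illustrative guesses.
    """
    accel = k * (desired_pos - cam_pos) - damping * cam_vel
    cam_vel += accel * dt
    cam_pos += cam_vel * dt
    return cam_pos, cam_vel
```

The same step applies per-axis in 3D, and a second spring on the look-at orientation gives the soft framing instead of a hard center-lock.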

While the improved camera tracking is pretty visible in the video, most of what I worked on in the past week was under-the-hood: specifically, I've been making the code more general (to make more complex additions easier to implement), and gearing the cameras up for cutting to different angles.

The major change is that simple behavior trees are now involved. Just importing all the right assets and files into my Unity project ended up being more of a hassle than I was expecting, but I got that done. There's now an actual Director too, who is currently just using a behavior tree when deciding which camera to assign an event to—you can see this in the video above, where the Director assigns the first occurring event to the red camera, the next event to the green camera, and so on.

The second major change (which, again, you can't see in the video) is that I redid a lot of the event and camera code, to make everything much more general: now you just tell the spheres what constitutes an event for them (in this case, it's being mobile), and they just tell the Director when they are engaged in an event, and when that event ends (in this case, when they become stationary). The spheres don't have to deal with the cameras at all, and the Director does all the mediating.

Since I think tracking is pretty solid, and I've got the start of behavior trees, next week I will play around with having the cameras cut to different angles. Surprised at how short this post is? Stay tuned for my self-evaluation!

Thursday, March 17, 2011

Follow the Leader

Well, I'm back from break, and you know what that means, right?! Right? Hello? You're going to have to yell louder at your computer screen—I can barely hear you.

What it means is that it's time for more updates. The major thing that I've been working on is getting camera navigation to work well in complex environments, because cutting to different angles won't matter much if the individual shots are incoherent. To work this stuff out, I built a sort of pachinko machine for a ball to roll down. I also gave it walls and a ceiling, to force the camera to track the ball between the pins—no cheating with shots from the outside, like I was doing with the last ball rolling down a hill. Here's the view from the top of the slope:

The first problem became clear right away: the camera was cutting to a position above the entire thing, unable to figure out how to get inside. Here's a shot of the red camera getting a spectacular view of... a gray ceiling.

This is a major issue: even if tracking is working fine, a poor initial choice of camera angle will mess up the entire shot. I didn't want the initial shot to just be some offset from the target, because the camera could end up inside another object, or the target could become occluded. It also wouldn't make sense to search arbitrarily for a position where the target is visible, because there are infinitely many shot choices.

For now, I ended up having the camera start at the position of the target, making the camera into a rigid body, and adding a force to push it off in one direction to determine its initial position (before it starts following the target). That way, the camera can't end up inside another object, because it will bounce off anything it hits; the target should also start out unoccluded, since the camera started at the target's position. I'm trying to figure out how to handle the collisions exactly, since right now the camera is capable of bouncing other objects away too, which is not good. But for now, this works for determining an initial position.

Now the camera is at least starting inside the pachinko machine, so that's good! It has its own collision detection, so it won't go through the pegs while using a tweaked version of the SmoothFollow script in Unity to follow the ball. However, while it continues to keep the ball in the center of its sights, it sometimes circles all the way around it, which gets really dizzying. Having the ball locked into the center of its view is also really jarring, especially when the ball makes sudden stops. Here's the camera shooting the ball from the wrong side:

What I ended up doing was adding a way to change which side of the target the camera should be tracking on: you can track from behind, from the side, from the front (so the camera is moving backwards), or any angle in between. I pretty much redid most of the follow script too, so that the camera is more free to avoid obstacles, while constantly trying to get back to a certain position with respect to the ball (in front, behind, etc). It's sort of working like the 180 degree rule, where the camera just stays on one side of the ball, and will only rotate around to the other side if the ball reverses direction. Here's a shot of the camera slaloming through the pegs to follow the ball:

I also tested out multiple cameras on the same object (which probably wouldn't really happen, since one camera would just be cutting between these 3 different angles). Here's the usual Ball Rolling Down A Hill, but with 3 cameras tracking along it—one behind, one in front, and one on the side. The blue camera is at more of an angle because it was tracking from behind and had to spin around once the ball hit the wall. I'm also working on getting the cameras to move closer to and further from their targets, since that will help them avoid collisions while keeping their targets in view.

(Sorry for all the screenshots: I actually just discovered that my Camtasia trial version expired, so I'm going to figure out a new way to make videos.)

Anyway! Next week, it's behavior tree time: since the cameras are decent at handling single shots (though obviously I'm going to continue fine-tuning), I'm going to get them cutting between different angles. To get started with behavior trees, I'm going to make it very simple—It will probably just be things like cutting between establishing shots, tracking shots, and close-ups. But it will give me something to build off of.

~ Fin ~

Thursday, March 3, 2011

Coming Attractions

I had a pretty busy week of projects and midterms this week, so you'll have to wait for the world premiere of the next installment of the "Roll Hard" series.

Previously on "Intelligent Camera Control":
Last week we had alpha reviews, and I got some great feedback, so thanks to everyone on the review panel! I've updated my Design Documents to reflect the changes to my approach (primarily the fact that behavior trees are a bigger part of my project than I originally envisioned, because I didn't know they existed before), so you can check that out below.

Rated 'G' for "Genres":
During the Q/A portion of the alpha review, I talked more about "camera genres." Originally I was thinking of this as a pie-in-the-sky future addition to the project, and it's still something that I'll work on after I get everything else working, but it's definitely something I want to do. The idea is that you can set individual cameras to film in certain genres, which will change their behavior—for instance, an Action camera would cut often and have lots of shaky-cam, while a Romantic Comedy camera would have longer takes and use soft focus. To implement this, I'm thinking that the cameras will have variables for different traits that go from 0 to 1—they'll be traits like cutting speed (long takes vs. quick cuts), mobility (stationary vs. hyper), focus (deep vs. shallow), and so on. Within the idioms of the behavior tree, the camera will use these values when making decisions. Then the genres will just weigh things differently. Of course, maybe certain genres can have certain specific traits—a Film Noir camera that shoots in high-contrast black and white, maybe?
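As a sketch, the genre presets might just be bundles of 0-to-1 trait values that the idioms consult when making decisions (names and numbers invented for illustration):

```python
# Hypothetical genre presets as 0-to-1 trait weights. The idioms in the
# behavior tree would read these when choosing shots; the genres differ
# only in how they set the values. Everything here is illustrative.

GENRES = {
    "action":          {"cutting_speed": 0.9, "mobility": 0.8, "shake": 0.7},
    "romantic_comedy": {"cutting_speed": 0.2, "mobility": 0.3, "shake": 0.0},
    "documentary":     {"cutting_speed": 0.4, "mobility": 0.5, "shake": 0.3},
}

def shot_duration(genre, longest=12.0, shortest=1.0):
    """Map the 0-1 cutting_speed trait onto an actual shot length in seconds:
    a faster-cutting genre gets shorter shots."""
    speed = GENRES[genre]["cutting_speed"]
    return longest - speed * (longest - shortest)
```

Genre-specific quirks (like Film Noir's high-contrast black and white) would sit outside the shared traits as per-genre extras.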

Give a Hoot, Read a Book:
Another major change to my Design Document is the addition of a new reference: "Real Time Cameras," by Mark Haigh-Hutchinson (thanks, Joe!). While this book focuses mainly on video games (so it involves much more input by players than I'm working with, and also focuses on specific things like cameras for 2D games and first person shooters), it seems like it will be extremely handy for getting a camera to follow events while avoiding obstacles and keeping the event unoccluded. It includes pseudocode too, which is always nice! Plus, it's all about camera control in real time, which already makes it more useful than many of the other papers I'm using as references. I'm currently reading through it, so I'll post more updates as I come across more useful things.

Next time on "Intelligent Camera Control":
Currently cameras are just using tweaked versions of Smooth Follow and Smooth Look At to film events. I've been building a more complex environment of tunnels, turns, and obstacles, so my next goal is to get the cameras to handle mobile events better: this involves preventing the camera from colliding with obstacles, keeping the target event unoccluded, and figuring out the best path to take. I'm going to read through "Real Time Cameras," and take another look at some other papers—"Visibility Transition Planning for Dynamic Camera Control" ([OSTG09]) seems like it will be useful, since it's all about moving a camera while keeping a target in view. Once cameras can handle events in single shots properly, I can finally move on to cutting and idioms!

Anyway, Spring Break is next week, so updates will slow down for a bit. But after that... Well, I don't want to overhype "Live Free or Roll Hard," but let's just say that it will be the most amazing video you've ever seen of a sphere rolling down a hill through a curvy tunnel into a wall. Probably.

Thursday, February 24, 2011

Roll With A Vengeance (Now With GUI)

It's your lucky day! I've been busy preparing for my alpha review tomorrow, which means I don't have time to write up a long and witty blog post.

First, remember my GUI? No? Fine, here it is again.

Okay, now get ready to experience this video. Things to notice:
- The GUI looks like my original GUI drawing! On the top left is the master track, which shifts when you press 1, 2, or 3 on the keyboard. Next to the master track are views for the three cameras.
- Speaking of which: multiple cameras! Right now I'm thinking that it would be jarring if the cameras appear and disappear as events occur, so it might be better to just have a set number of cameras which are always there—maybe the user can set the maximum number of cameras himself, and if too many events are occurring, the user can set priorities/rankings for different events.
- The user can now navigate freely through the environment, with mouse/keyboard commands similar to Maya. This ended up being more of a pain than I was expecting, but it works now.
- You can see the cameras in the environment now, represented as little boxes. They're also color coded by the backgrounds that appear in their views.
- Two events occurring: ball rolling down some slopes, and ball bouncing up and down. Two different cameras cover them, and you can switch between them on the master track.

Ignore the long stretches of video where nothing seems to be happening. I'll be playing this during my alpha review and speaking over it as it runs. So while you watch, just imagine me talking (or singing) in the background.

I can't wait for "Ball Rolling Down A Hill Into A Wall: The Movie: 4: Live Free Or Roll Hard"... it will surely be exciting!

After that I'll have to switch to another franchise. Suggestions?

Thursday, February 17, 2011

Roll Harder (Now With 100% More Splitscreens)

I've been incredibly busy with other work for the past week, so this post will be uncharacteristically short. Consider yourself lucky! I'm sure next week will see the return of my overly-long and excessively-wordy blog posts with unnecessary illustrations.

Sorry, I'm already getting overly-long with this one.

Anyway, I did manage to figure out windows-within-windows! Now when the smart camera is active, a window pops up to show what it sees. I also wanted to learn GUI text, so I stuck in a status which tells you whether an event is occurring or not. Again, for this test, the event is simply "the ball is moving." Of course, I didn't want the camera to quickly appear and disappear whenever the ball jitters a little bit, so I've made an easily adjustable variable for how long the ball must have been moving for it to be considered an "event." Likewise, you can determine how long the ball must be immobile before it's declared that the event is over.
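For the curious, that start/stop logic is basically a little hysteresis timer. Here's a sketch of how I think about it (the class and variable names here are made up for illustration, not my actual script):

```csharp
// The ball must move continuously for startDelay seconds before an
// "event" begins, and sit still for endDelay seconds before it ends,
// so tiny jitters don't make the camera flicker in and out.
public class EventDetector
{
    private readonly float startDelay, endDelay;
    private float movingTime = 0f, stillTime = 0f;
    public bool EventActive { get; private set; }

    public EventDetector(float startDelay, float endDelay)
    {
        this.startDelay = startDelay;
        this.endDelay = endDelay;
    }

    // Call once per frame with the frame time and whether the ball moved.
    public void Update(float dt, bool isMoving)
    {
        if (isMoving) { movingTime += dt; stillTime = 0f; }
        else          { stillTime += dt; movingTime = 0f; }

        if (!EventActive && movingTime >= startDelay) EventActive = true;
        else if (EventActive && stillTime >= endDelay) EventActive = false;
    }
}
```

The two delays are exactly the "easily adjustable variables" mentioned above: crank them up for a lazier camera, down for a twitchier one.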

As I've said, the user should also be able to control the main camera as if they're using Unity normally (or Maya, since it has the same controls). As far as I can tell, there's no included script for this which can be attached to the main camera... I may have to do that myself. For now, you can right click and drag the view around a little.

Enough of my yammering. Let's see some video!

So there you have it. You can see me move the main camera around a bit at the beginning of the video. Pressing 'x' on the keyboard makes the barrier disappear, causing the ball to start rolling, and the smart camera to start tracking it. Watch the camera view at the bottom, but you can also see the actual camera moving in the top. Exciting stuff.

Next week it's time for alpha reviews! I'm going to try out more complex events, to see how the camera responds to them (like when things become occluded)—right now the camera is just using the SmoothFollow script which is included in Unity, so I will start editing that to make it behave more interestingly. I really want the camera to handle single shots well before it starts cutting to multiple angles.

Time to get to work on "Ball Rolling Down A Hill Into A Wall: The Movie: 3: Roll With A Vengeance"

Thursday, February 10, 2011

Ball Rolling Down A Hill Into A Wall: The Movie

Well, I've created my first cinematic masterpiece in Unity! I call it "Ball Rolling Down A Hill Into A Wall: The Movie"

Okay fine, it's not quite a must-see, fun-for-the-whole-family, laugh-a-minute thrill-ride. But I have gotten a much better handle on Unity in the past few weeks. I read a lot of tutorials and watched a lot of video lessons, but easily the most helpful were on Unity 3D Student: these videos are short (2-4 minutes) and very to the point, so you know exactly what you're about to learn, and it's very easy to follow along. Plus, the guy has a British accent, which makes him sound smart. If you're learning Unity for your project, I highly recommend these.

Anyway, after learning all about adding components to objects, writing scripts, and switching between cameras, I put together an extremely simple test: a ball on a slope, a barrier preventing the ball from rolling down, and a wall at the bottom of the hill. I've got a video uploading to YouTube right now, but until then, you should be able to follow the plot through these screenshots.

The main camera simply watches the entire scene from the perspective view (in the screenshot below, the top window is the Scene view of the entire environment, and the bottom window is the Game view which is showing the perspective of the main camera).

The Ball is a rigid body affected by gravity, so it is leaning against the barrier. Pressing 'x' on the keyboard makes the barrier disappear, which causes the ball to start rolling down. More importantly, once the ball starts moving (a simple "event"), a new camera is created, set to be the active camera in the scene, and ordered to follow the ball. In the below screenshot, you can see the 2nd camera in the Scene window, and the perspective of the Game window has been changed to this new camera.

The 2nd camera follows the ball as it careens down the hill, slams into the wall, and comes to an explosive halt. I might be making it sound more exciting than it actually is.

Once the ball stops moving, the "event" is over, so the view of the Game window shifts back to the main camera, and the 2nd camera disappears.

So there you have it! Obviously this is all pretty simple, but I think I'm off to a good start. I played around a bit with GUI elements as well, so I'm going to continue with that for next week: remember, the Game view should just be a perspective view (controllable by the user), and clicking on the smart cameras should make little preview windows show up (just like the little Camera Preview in the Scene window above). So hopefully I'll have those windows-within-windows next week.

Coming soon: "Ball Rolling Down A Hill Into A Wall: The Movie: 2: Roll Harder"

Wednesday, February 9, 2011

Cinematography Basics: Zoom and Enhance

Tomorrow I'll have a post up about Unity, but let's get some other camera techniques out of the way: zoom and focus. Both of these also contribute to framing, which will be one of the main challenges that I'll have to handle with my intelligent cameras.

Zooming involves adjusting the focal length of the camera. It's probably not something that I'll be dealing with a lot, since my cameras can just move forwards and backwards in space if necessary. There is one cool thing that zooms can be used for, though it's not really a high priority for me to implement: the so-called "Vertigo Shot," which you may know from Hitchcock's "Vertigo" (or maybe "Jaws"). The idea is to move the camera forward while zooming out, or move the camera backward while zooming in, which makes the distance between the foreground and background appear to shrink or stretch. Here's a video with some examples of it used in famous movies:
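Incidentally, the geometry behind the effect is simple enough to sketch: a camera with field of view fov covers a frustum width of 2·d·tan(fov/2) at distance d, so keeping the subject the same apparent size while dollying just means solving that for fov. Here's a tiny helper illustrating the math (my own function names, not anything from Unity):

```csharp
using System;

public static class DollyZoom
{
    // Width of the view frustum at a given distance for a given FOV (radians).
    public static double FrustumWidth(double distance, double fovRadians)
        => 2.0 * distance * Math.Tan(fovRadians / 2.0);

    // FOV (radians) needed to keep that width constant as the camera dollies.
    public static double FovForWidth(double width, double distance)
        => 2.0 * Math.Atan(width / (2.0 * distance));
}
```

Dolly the camera back and feed the new distance into FovForWidth each frame, and the subject stays the same size while the background appears to rush away—that's the whole trick.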

Focus has to do with the camera's depth of field. I think it would be cool to play around with this: establishing shots should probably have a deep focus so everything is clear; once the camera moves in tighter, objects tied to the event can be kept in focus, while the background can be blurrier. There's also a technique called "rack-focus," where the focus shifts between multiple planes. If there's ever a time where a camera is framing one event, and a second event occurs within the frame but in the background, a rack-focus could be used to direct the audience's attention.

So now we've talked about getting things into the frame. But once they're in there, composition is important too! Here are a few rules:

Don't cut off people at the ankles, knees, or neck. It just looks odd. Still, it will be difficult to tell whether objects are characters, so it will be tough for the cameras to avoid cutting people off... to play it safe, my cameras will probably always start with establishing shots, and then make sure to keep a large chunk of the object in frame when doing closer shots. If only a small part of the object is in frame (like in the "bad" example below), things look weird.

Let the actor lead. If a camera is following a character, it should wait for the character to start walking before it begins panning. If the camera moves prematurely, it can be jarring and pull you out of the movie, because it's obvious that the cameraman started moving in anticipation of what he knew would happen next. This shouldn't be a problem for my project, because the cameras don't know what's going to happen next anyway, and are always waiting for their targets to make the first move. I've sometimes heard that if the camera and character are both moving, the camera should stop before the character does… But I don't think this is as glaring as the start of the action. Besides, my cameras wouldn't know to stop early anyway.

Use negative space. Framing things in the dead-center of the composition is boring. But this is especially important with movement: if a character is walking and the camera is tracking next to them, the composition should leave empty space in front of the character. You really don't want the character to look like they're walking off the frame, or for it to seem like the camera isn't keeping up. Even when a character is still, the camera should leave space in the direction that they're looking. If this rule is being broken, there's probably a reason for it…

…Like maybe you're watching a horror movie. Once you know to look out for this, you can ready yourself for lots of cheap jump scares.

Sure, it's usually just a cat jumping out. But at least you know that the cameraman was shooting that way on purpose.
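As a toy version of the lead-room rule: instead of aiming at the character directly, the camera can aim at a point pushed ahead along the character's velocity. A quick sketch (leadDistance is a made-up knob of mine, and I'm using System.Numerics types for the example):

```csharp
using System.Numerics;

public static class Framing
{
    // Aim slightly ahead of a moving target so the empty space sits in
    // front of it. A stationary target is framed directly.
    public static Vector3 LookTarget(Vector3 position, Vector3 velocity,
                                     float leadDistance)
    {
        if (velocity.LengthSquared() < 1e-6f) return position;
        return position + Vector3.Normalize(velocity) * leadDistance;
    }
}
```

For a still character you'd want the analogous trick with their facing direction instead of their velocity, but the idea is the same: leave the space where the action is headed.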

Next time on Cinematography Basics: Editing and cuts!

Thursday, February 3, 2011

Time Off for Good Behavior Trees

One of the main issues that I'll be trying to solve is how to make the cameras behave like... well, cameramen. When an event occurs, filming the entire thing in a single wide shot would work, but it wouldn't be very cinematic. Instead, the cameras will move and cut to different angles. There are rules of composition and editing that I'll get to in another post, but let's first talk about choosing shots in the first place. I originally had a vague idea of using some sort of finite-state machine, until Joe pointed me in the direction of behavior trees—specifically, a chapter in Artificial Intelligence for Games, by Ian Millington and John Funge. It seems like behavior trees will work well for my cameras. So thanks, Joe!

Let's get started by looking at this wacky diagram of a basic camera behavior tree:

This tree describes how a single camera might behave. At the top of the tree is a Proceed Loop followed by a Selector—in simpler terms, this means that the camera is in a constant loop of "selecting" which of the two subtrees (left or right) to follow. Conditions, the white rectangles in the tree, test something and then return either true or false. A selector will go through each of its children, and stop once one of them returns "true."

Essentially, the left subtree is behavior for when the camera is not assigned to an event, and the right subtree is behavior for when it is assigned. The Parallel box means that its children execute simultaneously—so in the left subtree, the Assert box will constantly check to make sure that there's no event assignment, and the camera will "wander" as long as that assertion is returning true. The little "wander" triangle represents a Sequence: if you think of a Selector as "OR," then think of a Sequence as "AND." Whereas a Selector tries its children until it finds one that returns true, a Sequence runs through its children in order and stops once it finds one that returns false. In this case, the children of Wander would be different actions that cause the camera to move sort of randomly through the environment.
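To make the Selector/Sequence distinction concrete, here's a minimal sketch of the node types (class names are mine, loosely following the Millington & Funge chapter; a real implementation would also need the Parallel and loop decorators from the diagram):

```csharp
using System;
using System.Collections.Generic;

// Every node "ticks" and reports success or failure.
public abstract class Node { public abstract bool Tick(); }

// A Condition wraps a test that returns true or false.
public class Condition : Node
{
    private readonly Func<bool> test;
    public Condition(Func<bool> test) { this.test = test; }
    public override bool Tick() => test();
}

// Selector = "OR": tries children in order, succeeds on the first success.
public class Selector : Node
{
    private readonly List<Node> children;
    public Selector(params Node[] c) { children = new List<Node>(c); }
    public override bool Tick()
    {
        foreach (var child in children) if (child.Tick()) return true;
        return false;
    }
}

// Sequence = "AND": runs children in order, fails on the first failure.
public class Sequence : Node
{
    private readonly List<Node> children;
    public Sequence(params Node[] c) { children = new List<Node>(c); }
    public override bool Tick()
    {
        foreach (var child in children) if (!child.Tick()) return false;
        return true;
    }
}
```

A camera's loop would then tick something like `new Selector(new Sequence(noEventAssigned, wander), new Sequence(eventAssigned, film))` every frame, which is exactly the left-vs-right structure in the diagram.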

The right subtree is the interesting one. It starts with another Parallel box, which makes sure that it will execute while the camera is still assigned to an event (once the event ends, we should revert back to wandering around). So what are we executing though? Another Selector, one which decides what kind of situation we're in. Many of the papers that I researched use the term "idiom" to refer to a series of individual shots for a certain purpose; many idioms involve the number of actors in the scene. Thus, a simple way to break this down further would be into behavior idioms for situations with only one actor (when I say "actor," I may be referring to any sort of object, not just a character), situations with two actors, and situations with more than two actors. Obviously there are more idioms than this, but this will be a good start.

This Selector (the one in the right branch of the tree) will be important. I'm going to call it the Cinematographer Node, since it plays the role of cinematographer: deciding what kinds of shots to use based on the current situation.

So let's say there are two actors in the scene, and the Cinematographer goes down the middle branch. As long as the event still involves 2 actors (by the assertion), the camera will enter the Shot / Reverse Shot idiom, which I've expanded at the bottom of the diagram. This idiom, often used in movies during conversations between two people, would involve the following series of shots:
  • A wide shot with both characters in frame
  • An over-the-shoulder (OTS) shot of character A (meaning a shot taken over character B's shoulder, so we see character A's face)
  • The reverse shot (an over-the-shoulder shot of character B)
  • A close-up of character A, without character B in the frame
  • A close-up of character B, without character A in the frame
  • Back to a wide-shot

This cycle will repeat as long as there are still 2 actors involved. If a third actor joins the event, the assertion will fail, and the Cinematographer will go to the Crowd Shot idiom. Similarly, if actors leave until 1 actor remains, the Cinematographer will go to the Follow idiom, which involves filming the lone actor by itself. If each actor is still labeled an event, we may need extra cameras to follow the individual actors, since they can no longer be contained in one frame. Once it's determined that a lone actor is no longer an event (sorry, lone actor, you're not important anymore), the entire right subtree would fail, and we'd go back to wandering. If extra cameras were added, they might just disappear.
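At its simplest, an idiom like Shot / Reverse Shot is just a repeating shot list that the camera steps through while the assertion holds. A hypothetical sketch (the enum and class names are mine, not actual project code—the final "back to a wide shot" step falls out of the wrap-around):

```csharp
public enum Shot { Wide, OverShoulderA, OverShoulderB, CloseUpA, CloseUpB }

public class ShotReverseShot
{
    private static readonly Shot[] cycle =
    {
        Shot.Wide, Shot.OverShoulderA, Shot.OverShoulderB,
        Shot.CloseUpA, Shot.CloseUpB
    };
    private int index = -1;

    // Advance to the next shot in the idiom; wraps back to the wide shot.
    public Shot Next() { index = (index + 1) % cycle.Length; return cycle[index]; }
}
```

Of course, as noted below, a real idiom also needs asserts between shots so a cut can bail out the moment the situation changes.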

One reason that this behavior tree structure will work well is that once it's set up, it will be easy to add more complicated idioms later—the Cinematographer Node will be happy to have more specific situations to test for. Right now I've got a test environment built in Unity, and I'm going to set up some simple, 1-actor events. The first version of the camera will only have a Follow idiom: for testing, this will involve cycling between a wide shot of the object, a moving shot (which would follow the object if it's moving as well), and a close-up of some feature of the object. After that, I can add idioms for dealing with multiple actors in an event.

There's one major thing missing from the behavior tree, however: while this tree will cause the camera to choose the right types of shots for each situation, it doesn't ensure that the shots are composed well. Worse, it doesn't prevent the camera from breaking cinematic rules while cutting between shots. I'll get to these principles of composition and editing in the next post—but basically, the idioms will need to have asserts inside of them, rather than just being a sequence of shots.

Thursday, January 27, 2011

Rough GUIs and Video DJs

I've been doodling a few ideas for how the GUI may eventually look. Here's a rough one:

The idea is that you (the user) should have a large view of the environment, and be able to navigate freely (the controls will probably be similar to Maya's, which work pretty well for 3D navigation). You should also have small views of all of the smart cameras, as if you're in the control room of a news show (in the above sketch, these show up at the top of the screen. There will probably be a limit to the number of smart cameras you can have at once). Clicking on a camera should open a window with options specific to that camera, so you can give it commands or constraints which will override the default "intelligent" behavior.

I've been thinking a lot about how all of this can actually be exported, so the "virtual documentary" that I keep talking about can be watched later, and not just while the events are happening. One possibility would be for every camera to export its own footage as some sort of video file, so a user can watch any of them by themselves, or edit them together in Final Cut or Premiere or something.

But I really like the idea of editing the movie in real-time. While the individual cameras will be smart enough to cut to different angles, I think the user should be in charge of actually cutting between different cameras.

Basically, the user would be a "Video DJ." There's one camera window, let's call it the "master track," which is the one whose footage is actually exported when the program is closed. The master track is always linked to one camera, and shows the footage that it's capturing—in the GUI diagram above, the master track is the top window all the way to the left, and it's currently linked to camera 1.

By pressing a keyboard button (maybe the number keys would be tied to numbered cameras), the user changes which camera is being recorded on the master track. This way, while each individual camera is capturing a different event, the user can cut between them at will, like a DJ cutting between different records. There could even be different keyboard commands for different transitions, like fade-outs, wipes, dissolves, and so on. Or a user could pick some sort of default setting, like having the master track just alternate between cameras every couple of seconds.
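The master-track switching itself is trivial—it's really just an index into the camera list, changed by key presses. A toy sketch of the idea (my own names, not real project code):

```csharp
// The "Video DJ" master track: number keys pick which camera's footage
// is being recorded. Keys with no matching camera are ignored.
public class MasterTrack
{
    private readonly int cameraCount;
    public int ActiveCamera { get; private set; } = 0;

    public MasterTrack(int cameraCount) { this.cameraCount = cameraCount; }

    // key is the pressed number key (1-based, so '1' maps to camera 0).
    public void OnKey(int key)
    {
        if (key >= 1 && key <= cameraCount) ActiveCamera = key - 1;
    }
}
```

Transitions like fades and wipes would hang off the same switch point: instead of hard-cutting when ActiveCamera changes, the exporter would blend the two feeds for a moment.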

Anyway, all of these recording things are just thoughts. The first version of the GUI will simply allow the user to navigate around and click on events, while a window will display a smart camera's view. DJ'ing will have to wait.

Wednesday, January 26, 2011

Let's Learn Unity

My project will run in Unity, a game engine which you can get here. So far, I've just been playing around with the free version, which has a solid amount of features in it—I may move over to the full-version later, if necessary.

After messing around with it for some time, it seems like Unity is a pretty cool program! They provide the skeleton of a platformer game to help you learn the interface, so I've been using that as a tutorial. Here's what Unity looks like when you open up the platformer demo:

Unity... IN SPACE.

This is the "Scene" view, which shows the entire map. When you hit the play button at the top, you switch to "Game" mode. In my project, the Game mode will actually look kind of similar to the Scene mode, because the user will be able to fly around the map wherever they want, trigger events, and play with their smart cameras.

Speaking of cameras, see that "Camera Preview" window in the bottom right? That shows what the camera (which is currently selected) is seeing. I'm going to start working on getting something similar to work when the project is actually running: while the user is doing their thing, smaller windows should show the views of the smart cameras. The user should also be able to select a camera and get a little window of options, so they can give the camera specific commands.

Programming in Unity involves programming lots and lots of scripts in C#. When I first tried to edit a script, it opened up in this awful editor called Unitron.


Thanks a lot to Nathan for showing me how to open scripts in Unity's MonoDevelop instead. Turns out that in Unity, you can go to Assets->Sync MonoDevelop Project. That way, edits made in MonoDevelop automatically synchronize in Unity.

Too bad "Unitron" sounds like an awesome robot, while "MonoDevelop" is boring.

Now that I've figured out how to edit scripts and apply them to objects in Unity, it's time to make a test environment and set up some simple tests that a user can trigger. It's really simple to import Maya files into Unity—I tested it with a model of a penguin wearing a top hat and monocle, of course.

Penguin... IN SPACE.

Once I build an environment in Maya, I'll work on simple scripts for things like making boxes move when a user either clicks on them or presses a keyboard button. Meanwhile, I'll be trying to make a Camera Preview window actually appear in-game. Exciting stuff!

Tuesday, January 25, 2011

Cinematography Basics: Panning and Tilting and Tracking, Oh My

I've been playing around in Unity to get a feel for how this game engine works, so soon I can get started on building a very simple environment and moving a camera around in it. Later this week I will post about my Unity progress, but I figured it couldn't hurt to do some primers on cinematography! My project is all about virtual camerawork, but I've done a lot of live-action filmmaking, and the same rules and terminology apply. Plus, I love drawing little diagrams, and everybody loves blog posts with pictures.

First up, basic camera moves. Let's break it down into three basic actions: Pans, tilts, and tracks. In general, most complex camera moves are just combinations of these elements.

(Note: In some cases, I've heard certain terms be used differently by different directors, in different filmmaking books, on different websites, and so on. At the very least, these primers will make clear what I mean when I use these phrases, so you'll know what I'm talking about in later blog posts)

A camera pans when it rotates horizontally, either from left to right or from right to left.

A camera tilts when it rotates vertically, either from up to down or from down to up.

In a tracking shot, the entire camera moves, rather than the direction it's pointing in. I've actually heard of this being divided up into "pedestal shots" (where the camera raises or lowers, as if it's on a tripod), "dolly shots" (where the camera moves forward or backward, as if on dolly tracks), and "trucking shots" (where the camera moves left or right). But for simplicity's sake, I'm going to lump them all into "tracking shots," where the camera physically moves. You can also think of this as a "crane shot," as if the camera is on a crane which can move it in any direction.

These elements are great enough on their own, but boil 'em together and you get 6 delicious degrees of freedom, allowing your camera to position and orient itself however it wants.

Next time on Cinematography Basics: Zooms, focus, framing, and composition!

Monday, January 24, 2011

Design Document

I know what you're thinking: "That abstract was pretty cool, Dan, but is there any way you could express the same ideas in a much longer and more detailed form, with lots of references to other papers and specific plans for your implementation? And while you're at it, could you include some sort of Gantt chart? Please phrase your response in the form of a design document."

So how did I know exactly what you were thinking? It is both a gift and a curse, and I would rather not talk about it now.

Regardless, my design document can be found below! (If the embedded version doesn't work you can also see it at


Thursday, January 20, 2011

Introduction and Abstract

Welcome to the exciting world of intelligent camera control! This blog will follow the progress of my senior design project, so stay tuned for updates.

Before I get into the nitty-gritty of virtual cinematography and how I plan to implement it, let's start at the very beginning. Here is the abstract of my project:

Users often view virtual worlds through only two kinds of cameras: a camera which they control manually, or a very basic automatic camera which follows their character or provides a wide shot of the environment. Yet real cinematography features so much more variety: establishing shots and close-ups, tracking shots and zooms, shot/reverse-shot, bird's eye views and worm's eye views, long-takes and quick cuts, depth of field and rack-focus, and more. For my project, I plan to implement an "intelligent camera" for use in a virtual 3D world using the Unity game engine: As events occur in real-time, this camera will automatically choose shots to depict them, and essentially create a "virtual documentary" of the events as they happen.

The camera will need to position, pan, tilt, zoom, and track as necessary to keep an unobstructed view of events in frame, while obeying traditional standards of cinematography (following the 180-degree rule, avoiding jump-cuts, etc). An artist can also play the role of "director" and give the camera instructions, such as ordering it to place more priority on one event over another, drawing a certain path for the camera to follow, and moving the camera to different vantage points (to mimic a helicopter or crane-shot, for example); when not being given specific orders, the camera will revert to its automatic "intelligent" state. Ultimately, a user should be able to place multiple intelligent cameras in the 3D world, trigger events to occur, and then cut between the cameras to watch a well-shot "documentary" of the events be constructed in real time.

Coming soon: My design documents, which more specifically detail how I plan to go about my project. Also, because this incorporates many techniques from shooting live-action film, I'll explain any terminology and concepts of real-life cinematography that I will be referring to later.