Welcome to the exciting world of intelligent camera control! This blog will follow the progress of my senior design project, so stay tuned for updates.
Before I get into the nitty-gritty of virtual cinematography and how I plan to implement it, let's start at the very beginning. Here is the abstract of my project:
Users often view virtual worlds through only one of two cameras: a camera they control manually, or a very basic automatic camera that follows their character or provides a wide shot of the environment. Yet real cinematography features so much more variety: establishing shots and close-ups, tracking shots and zooms, shot/reverse-shot, bird's-eye views and worm's-eye views, long takes and quick cuts, depth of field and rack focus, and more. For my project, I plan to implement an "intelligent camera" for use in a virtual 3D world built in the Unity game engine: as events occur in real time, this camera will automatically choose shots to depict them, essentially creating a "virtual documentary" of the events as they happen.
The camera will need to position itself, pan, tilt, zoom, and track as necessary to keep an unobstructed view of events in frame, while obeying traditional standards of cinematography (following the 180-degree rule, avoiding jump cuts, etc.). An artist can also play the role of "director" and give the camera instructions, such as telling it to prioritize one event over another, drawing a certain path for it to follow, or moving it to a different vantage point (to mimic a helicopter or crane shot, for example); when not given specific orders, the camera will revert to its automatic "intelligent" state. Ultimately, a user should be able to place multiple intelligent cameras in the 3D world, trigger events, and then cut between the cameras to watch a well-shot "documentary" of those events be constructed in real time.
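To give a rough sense of the most basic piece of this (tracking a subject while keeping it in frame, without snapping), here is a minimal Unity sketch. This is only an illustration, not my actual design: the class name, the public fields, and the simple Lerp/Slerp smoothing are all placeholders I made up for this post.

```csharp
using UnityEngine;

// Minimal sketch of the tracking behavior described in the abstract.
// "IntelligentCamera", "target", "followDistance", etc. are hypothetical
// names for illustration only -- not part of the final design.
public class IntelligentCamera : MonoBehaviour
{
    public Transform target;          // the character or event currently being filmed
    public float followDistance = 6f; // how far behind the subject the camera stays
    public float height = 2f;         // vertical offset, for a slightly high angle
    public float smoothing = 2f;      // higher = snappier pans and tracking moves

    void LateUpdate()
    {
        if (target == null) return;

        // Desired position: behind and above the subject, along its facing direction.
        Vector3 desiredPosition = target.position
                                  - target.forward * followDistance
                                  + Vector3.up * height;

        // Ease toward that position instead of snapping, so the move reads as a
        // tracking shot rather than a jarring cut.
        transform.position = Vector3.Lerp(transform.position, desiredPosition,
                                          smoothing * Time.deltaTime);

        // Smoothly pan/tilt so the subject stays in frame.
        Quaternion desiredRotation = Quaternion.LookRotation(target.position - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, desiredRotation,
                                              smoothing * Time.deltaTime);
    }
}
```

The real camera will obviously need much more than this (obstruction checks, the 180-degree rule, shot selection), but the general pattern of "compute a desired position/rotation, then ease toward it" is the starting point.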
Coming soon: My design documents, which more specifically detail how I plan to go about my project. Also, because this incorporates many techniques from shooting live-action film, I'll explain any terminology and concepts of real-life cinematography that I will be referring to later.
Great idea, Dan! So say the magic camera is watching something at point A, and then suddenly a fire happens at point B (far away, say 100m). Is the intended behavior for the camera to very quickly pan/track a long distance to focus on the new event? Cut to the new event? Or is the scope of this project only meant to concern smaller environments?
(Disclaimer: I only read the abstract.)
Great project idea and nice job with this blog so far.
Although I haven't read it myself yet, you may be interested in the book Real-Time Cameras.
(I realized that I totally neglected to respond to some comments, so I'm doing that now!)
Patrick: Thanks for the link! That book definitely seems like it might be useful.
Lu: Currently, a second camera would be created to cover the fire. In another post, I kind of compared the user to a "Video DJ": you can cut back and forth between multiple cameras, while each one only films one thing. Alternatively, if the user doesn't want to worry about cutting, they can use a limited number of cameras and assign priorities to different events: if a high-priority event occurs, a camera that's filming something boring would abandon it to cover the new event.
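In case it helps to picture the priority idea, here's a very rough sketch of how cameras might get reassigned when a higher-priority event appears. All of the names here (FilmableEvent, CameraAssigner, Priority) are made up for illustration, and it reuses the hypothetical IntelligentCamera from the sketch in the main post.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical representation of something worth filming.
public class FilmableEvent
{
    public string Name;
    public int Priority;        // higher = more important to cover
    public Transform Location;  // where the event is happening in the scene
}

public class CameraAssigner
{
    // Each camera films the highest-priority event not already covered, so a
    // camera pointed at something boring gets pulled away when a big event fires.
    public Dictionary<IntelligentCamera, FilmableEvent> Assign(
        List<IntelligentCamera> cameras, List<FilmableEvent> events)
    {
        var assignments = new Dictionary<IntelligentCamera, FilmableEvent>();
        var remaining = events.OrderByDescending(e => e.Priority).ToList();

        foreach (var cam in cameras)
        {
            if (remaining.Count == 0) break;
            var evt = remaining[0];     // most important event still uncovered
            remaining.RemoveAt(0);
            assignments[cam] = evt;
            cam.target = evt.Location;  // retarget the camera (field from the earlier sketch)
        }
        return assignments;
    }
}
```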