As per usual, anything said during the show is subject to change by CIG and may not always be accurate at the time of posting. Also any mistakes you see that I may have missed, please let me know so I can correct them. Enjoy the show!
They needed lightweight interactable seats but only had the (very complex) pilot seats
Quickly grew to include other things: first case wasn't a seat at all but the hacking laptop
The AI doesn't know how to sit in a chair, so the chair has to tell them, step-by-step, what to do
Involved different disciplines to improve workflow e.g. replacing alignments with dynamic enters/exits
Created a process for requesting usables and categorizing them between "can do" and "need tech"
First test case for AI was the mess hall table: 8 actors can potentially use it at the same time
Stacking behaviours: using a usable (e.g. a glass) which is on a usable being used (e.g. a table)
Making usables feel real and natural comes down to having variation in the animations
They approach the problem from what it takes to make it look good and feel alive, but that requires thousands of usables, and so must be balanced with production demands and cost
Sandi Gardiner (SG): Hello and welcome to another episode of Around the Verse, our weekly look at Star Citizen’s ongoing development. I’m Sandi Gardiner.
Chris Roberts (CR): And I’m Chris Roberts.
SG: On today’s show we examine our expansive usable system and look at what it takes for a player to pick up, put down, and even sit in an object in game - as well as whether we break the record for most mess hall animations in one episode.
CR: We probably will but first let’s check in with the team for the latest bugs and blockers affecting 3.0 in this week’s Burndown.
Eric Kieron Davis: Welcome back to Burndown, our weekly show dedicated to reviewing progress on issues blocking Star Citizen Alpha 3.0. Last week we ended at seven total must-fix issues, which were prioritized as three blockers, two criticals, one high, and one moderate. So let's check in on progress this week.
Wilmslow Studio Sept 22nd
Matthew Webster (MW): So what we’ll do is pass this list of Jiras over to… this list of bugs over to QA. We’ll gather up the bugs for them and put any new ones in. Pretty much all my notes for this are going to be a list here of the bugs we highlighted through our playthrough, and then I’ll call out the ones that look like they’re going to need to be a PTU must-fix and call up Erin to make a decision on which of those he agrees with and which he doesn’t.
This morning the doors in Grim Hex don’t open. It’s been assigned over to Matthias; it could be a DBA animation issue. The rest of them, for the most part, are all backend related.
Earlier today though we had a gameplay review with production leads and directors. The idea of that was to go through what we want to do for Evocati: get up out of your bed in Olisar, get into a ship, go around and explore, do various traversals and some Quantum Travel jumps, do some exploration. What we were really looking for was any new bugs that appeared that no one had bugged yet, just to make sure that we got absolutely everything we wanted to get fixed for the Evocati, and I’m glad to say that we actually only found three bugs, all of which we knew we needed to fix for the Evocati, so that was really good.
Wilmslow Studio Sept 25th
Erin Roberts (ER): The important stuff is for Joe to fix his stuff. I don’t think, outside of that, any of those are blockers for going to Evocati, so once we get the persistence stuff fixed up we can probably go with what else we’ve picked up. I did see an issue when I was playing today - not sure if it was just the Cutlass or all ships - the UI is completely blurry and the colours on the UI are just absolutely insane. So I just want to make sure that’s just a Cutlass issue and not all ships.
LA Studio Sept 26th
CR: I have one thing from the Engineering meeting: we were discussing the object container… sort of opening it up and editing in situ, especially for lighting. So, you know, Ariel’s been doing some work… Steve Humphreys, he kind of does Roger’s tail work, but I think there’s a disconnect between the Engineering stuff - where we have some tasks that we’ve just been working on by themselves - versus what the users need and want. So we were talking about trying to put a little bit of focus on that, because I think the problem is that it seems like they’re done but the… like doesn’t even know about it, right? Then maybe some of the things weren’t done in a way that would be good for the artists anyway, so what we need is someone who can drive a bit of that process, who can straddle the technical side and the artistic side, and I volunteered you and the Engineering lead.
Sean Tracy (ST): Yeah, sounds good, and again, sort of like we’ve talked about, the project director, Derek, and I have to sync up because ultimately the tools programmers are under Derek, right? Well, not all of them, but two of the four…
CR: I mean what should happen is finding out what would make it easier for Ian or Chris Campbell or Zane or Hannes or whoever, and making sure that we’re doing those things, because we have these tasks. I mean, Ariel’s been working on cross object container editing and he’s got Qt windows to bring up and stuff, but I think a lot of the… on the content side people are not even aware of… like I think it was only today they were like, ‘oh, I can help with some of this stuff’. I think there are some issues opening it up and they want to fix it but… so what I would like to do is have someone who’s driving that process, and I think as the global content sort of director that’s kind of your job: to straddle it, to translate for Engineering what they need to do for the tools and stuff, making sure you’re listening to all the needs of the content creators. ‘Cause I think we’ve had a bit of disconnect: it’s like, we need these things, so Jiras get made and people sort of work on the Jiras…
ST: No, what’s worse is that…
CR: In abstract and it doesn’t have a cohesive leader like overman, right?
Wilmslow Studio Sept 22nd
Dev 1: We’re in slow mo? Ok, we’re in slow mo.
Dev 2: I think we’re in slow mo.
Dev 1: So this is the bug that Clive’s working on right now and…
Dev 2: It’ll eventually catch up.
Dev 1: This is the first time you guys have seen this in a few days, right?
Dev 2: We have been seeing it.
Dev 1: Just fairly infrequently though. Typical, typical that it happens and Vince managed to get a low repro bug for Starmap yesterday.
Dev 2: Only as far as we know.
Clive Johnson: So the clock was actually running very slowly on the client, and the effect, if we look at this video here, is as you can see: the player starts moving and moving very slowly. It does eventually recover. So when we have these four times that we’ve got from the ping and the pong messages, we enter two sample points into a linear regression, and that allows us to estimate a straight line: the slope, which is the relative drift between the two clocks, and the offset between them. We don’t account for latency in these samples; what we rely on is the mathematical technique that we’re using to balance it out. The reason we do that is because latency isn’t symmetrical: the time it takes for a message to go from the client to the server is very rarely equal to the time it takes for the message to go from the server back to the client, so that can skew our time computation algorithms a bit. So we use several samples and average them out - we use this least squares method to try and balance it out. So the bug was actually that one of the samples, the first sample, was getting rejected but the second sample would go in, so you’d have an unbalanced error. It was very large and threw the whole calculation off, and it would then take time for the client to recover. So the fix was to make sure that we only ever entered the samples in pairs, and if we were going to clear out the history - because it’s got to diverge from the new input we’re getting - then we only clear it out in whole pairs so we don’t have a stray one left in.
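The paired-sample least-squares idea Clive describes can be sketched in a few lines of Python. This is illustrative only - the function and variable names are invented, and the real netcode is far more involved - but it shows how each ping/pong round trip yields two sample points (one per direction) whose latency errors cancel in the fit, and why a stray unpaired sample would bias the result.

```python
def estimate_clock(samples):
    """Least-squares line fit: server_time ~= drift * client_time + offset.

    samples: list of (client_time, server_time) pairs. Latency is not
    subtracted per-sample; instead the asymmetric errors of the two
    directions are left to average out in the regression.
    """
    n = len(samples)
    sx = sum(c for c, _ in samples)
    sy = sum(s for _, s in samples)
    sxx = sum(c * c for c, _ in samples)
    sxy = sum(c * s for c, s in samples)
    drift = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - drift * sx) / n
    return drift, offset


def add_round_trip(history, ping_sent, ping_recv, pong_sent, pong_recv):
    """Each round trip contributes TWO sample points - one per direction.
    The fix described above: only ever add (or evict) samples in these
    pairs, so a lone one-directional sample can't unbalance the fit."""
    history.append((ping_sent, ping_recv))   # client -> server leg
    history.append((pong_recv, pong_sent))   # server -> client leg
```

With a true offset of 5 s and small, unequal latencies in each direction, the fit recovers roughly drift = 1.0 and offset = 5.0 because the uplink and downlink errors pull in opposite directions.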
Sam Child: We did a desk review with Erin and he basically sat with us, played through the game and called out issues that he thought were big things that would stop this going to Evocati. There were only a couple of issues. After that we got to go over the list of things that he pointed out; we sent an email to production and got the must-fix labels stuck on all those issues just so they’ve got more eyes on them from production.
Rhys Twelves: Right now I’m testing a bug that was the Gamescom bug, the big one that caused the crash on stage. So the reason the bug was happening on stage is that we weren’t freeing the tokens for each of the seats that a player may take. These are tokens that allow a player to get into the copilot seat, get into the pilot seat… maybe even exchange goods at the shop - all the different things that players can do. We weren’t freeing all the tokens when clients would disconnect or crash. So the bug fix was to ensure that on a client disconnect we picked up the signal and ensured all the tokens they were holding at the time were freed, so any new clients that were either in the game or connected later could then retake those tokens and take the seats.
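The token bookkeeping described above can be sketched as a tiny ownership map. This is a hypothetical illustration (class and method names are invented): each interactive slot - pilot seat, copilot seat, shop terminal - is guarded by a token, and the Gamescom fix amounts to sweeping and releasing everything a client holds when it disconnects.

```python
class SeatTokens:
    """Illustrative token manager: one owner per token at most."""
    def __init__(self):
        self._owner = {}  # token_id -> client_id

    def take(self, token_id, client_id):
        if token_id in self._owner:
            return False              # someone is already in that seat
        self._owner[token_id] = client_id
        return True

    def on_disconnect(self, client_id):
        # The fix: free every token the disconnecting client was holding,
        # so later clients can retake those seats.
        held = [t for t, c in self._owner.items() if c == client_id]
        for t in held:
            del self._owner[t]
```

Without `on_disconnect`, a crashed client would leave its tokens owned forever and the seats permanently blocked - which is the failure mode that crashed the stage demo.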
LA Studio Sept 27th
Michael Dillon: Still got the last two PTU must-fixes. For one, the quantum drive would no longer snap to the plane of the solar system when you jump because, as Mark and I found out, you have a right-handed coordinate system and a left-handed coordinate system, and that changes several things. On the critically marked bugs, the one we’re working on right now is the stations around Crusader near Port Olisar that you can jump to. For some reason when you jump to them you’re way off. I think it’s a tuning issue, because the arrival radius is like 10,000 and the offset is another 10,000, so you end up just way off somewhere. For something that small you should try and arrive closer, so part of sending that back is just adjusting the variables. Then I don’t know the next thing we’ll work on - just go down the list to the next most important looking thing.
MW: We’ve currently got two bugs left now before we go to Evocati. Yesterday we had the persistence issues that were causing a lot of grief; we’ve got fixes in for those, QA in Austin got a new build for that and tested them, and they all seem to be fine. So we’re actually looking on course to go to the Evocati group.
Christopher Eckersley: We had a bug come in last night from ATX as we were preparing to push to Evocati, it was a server crash that happened when ships were despawning on a server. It was picked up by one of our network engineers in the UK who ensured the cache tokens were removed prior to an entity being shut down and it means the servers are now stable as we prepare for an Evocati push.
MW: When we came in this morning, we had an email from our US studio in Austin that came to us whilst everyone in the UK had gone home. It outlined the underlying cause of a few other issues we were seeing. The main cause seemed to be that our build system, for some reason, hadn’t been picking up any changes in our persistent data cache over the last couple of builds, which seems to have been the main reason why we had issues like the shop items not being present at all in any of the shops. Then, once we’d run a build with the persistence cache properly updated, there were a few crashes related to this, because in the interim there’d been some work on the persistence cache which had a knock-on effect and caused some other crashes.
CR: Yeah, I mean I don’t see there’s any better thing going if you’ve got five or six people with crashes every 5-10 minutes for two hours - it’d be a frustrating experience. You’re going to get a whole bunch of people that download it, and if the frequency of crashes is proportionate to the amount of people playing, there will be a lot more… you know, you’ll be having crashes every minute or two, ‘cause you’re going to fill up these instances pretty quickly. I would guess for every Evocati this will be the very first time they get to play 3.0, so they’ll be excited and they’ll download it and then they’ll be kicked back to the desktop.
LA Studio Sept 28th
Chad McKinney: What I can glean out of it right now it is very strange, it’s going to take some digging as to why this particular sequence of events has occurred.
ST: We want this to be as clear and stable as possible; you’re testing in the environment it’s going to be played in, meaning… 3.0 is multiplayer, you test in multiplayer. You test connected to a server, so that’s it.
CR: I guess just make sure the UK QA and the German QA are available to help people test multiplayer…
ST: Oh yeah to get it up and running.
EKD: All right I’m going to record Burndown now.
EKD: As you can see we’ve started our official go/no go sync, which is the place where key stakeholders from each discipline evaluate the build with Chris and ensure we’re all in agreement that it’s ready for the publishing process. These will be held almost daily going forward. This week our issue count fluctuated up and down based on build stability. When we hit zero issues yesterday we held our first go/no go sync, and we discovered a few issues we needed to fix prior to this release. We had another sync today, and even with our new fixes the build remained unstable. So at the time of filming this we’ve reduced our must-fix issues by two, meaning we have five issues blocking this first release. We’re getting very excited by the prospect of wider testing and cannot wait to get this massive update to the Star Citizen universe out for you to play in the very, very near future. See you back next week here on Burndown.
SG: Thanks, Eric. If you’re looking for more information on what bugs we’ve smashed, then be sure to check out the production schedule report on our website.
CR: Now let’s turn our attention to usables, which basically consist of any items that players and AI can interact with in the game. This includes everything from picking up a cup, to using a laptop to hack a control point in Star Marine, or even sitting at a table in the mess hall. So there you go.
SG: Devising a system that could make the experience feel real and natural while covering the vast amount and variety of usables that would appear in the game was a huge challenge the team had to tackle.
CR: Yes, so let’s take a look.
Jens Lind (JL): Usables for us started in the lead-up to last year’s CitizenCon. We realised we have a lot of seats in the game, and the only type of seat available to us was the pilot seat. Pilot seats are complicated. You don’t want to say “Oh, it’s a park bench” or a bench in the mess hall or a bed, you don’t want to say that has the same attributes as a pilot seat. We wanted to create something simpler, so we came up with this idea which was just “Oh, it’s just a simple seat - it should be lightweight and easy to mass produce”.
So we started with that and it grew to also incorporate other things. The first actual test case of this simple seat - which meant we had to change the name, really - was the hacking laptop in Star Marine. So there’s no seat involved at all. So it became more like a usable: something you can use, and when you use it, it plays an animated reaction on the player. And there might be multiple ways you can use it, so you had the whole Inner Thoughts on top of it so you can select like “Oh, I want to hack the laptop. I want to close the laptop. Maybe someone closed it so now I need to open it first.” All that kind of complex behaviour.
Gregoire Andivero (GA): So in a game world it’s like talking to babies, right? When I tell you “sit on a chair” you know how to sit on a chair, ‘cause you’ve done that a thousand times. But what happens in the game world is that the AI needs to actually ask the chair how it is supposed to use it. So it’s like the chair needs to hold the data: you can sit from all those different angles, you can use those animations to do so, and when you are actually sitting you can play this, this and that.
If you’re sitting on a chair that’s attached to a table, you can pick up stuff that’s on the table. In real life, while you are sitting on a chair you know there’s something on the table and you just pick it up. The AI has no awareness of that, so you need to tell the AI: “You’re on a chair that’s linked to a table”, and so the table tells you “I also have a glass on me”, and the AI then has to know that it can pick it up. So we’re remapping the way our brains are hardwired for the AI. It’s step-by-step, so everything that seems trivial is actually something we need to take care of.
So it’s like remapping every conscious choice linked to walking. You’re putting your foot up. You’re putting your foot forward. Putting your foot down. Putting your other foot up. And you need to tell the AI to do all of those steps one-by-one instead of just relying on its ability to just do what it’s supposed to do, because there’s no such concept.
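The "the chair holds the data" idea can be sketched as a small data structure. This is purely illustrative - all names here are invented, not CIG's actual schema - but it shows the shape of the system: the usable advertises its own entry animations and links to other usables (a chair linked to a table, a glass on the table), and the AI queries the object instead of "knowing" what to do.

```python
from dataclasses import dataclass, field

@dataclass
class Usable:
    """Illustrative usable: carries the step-by-step knowledge the AI lacks."""
    name: str
    entry_animations: list            # e.g. ["sit_from_left", "sit_from_behind"]
    linked: list = field(default_factory=list)  # other Usables reachable from here

    def available_interactions(self):
        # The AI asks the usable what can be done here, including
        # interactions stacked on linked usables (glass on table on chair).
        options = [(self.name, anim) for anim in self.entry_animations]
        for other in self.linked:
            options.extend(other.available_interactions())
        return options
```

Querying the chair then surfaces the whole chain: sitting down also exposes the table's glass, mirroring the "chair tells you it's linked to a table, the table tells you it has a glass" chain described above.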
So after the failure we realised that we needed the angle of the other disciplines in a more involved way than we have had until now. So from the UK studio, Jens Lind’s team, we were joined by Jamie Visser, who is a Gameplay Programmer. We started working on the improvement of the workflow and how the animation and geometry of the usable were processed in the pipeline. And we got a few updates in that were actually speeding up the pipeline so much - such as all the alignments around the usable that I talked about, we actually read that from the animations, so you don’t need all of this heavy, clunky markup, which sped up the art process so much.
And from Austin we were working with Bryan Brewer, who was helping us from the start in the first installation and kept doing so, creating some kind of triangle of people coming at this system from all of their angles and trying to cover all the bases and all the cases that we needed. So we were able to optimise the pipeline for everyone, because one of each discipline was involved.
With CitizenCon there were some requests that we didn’t see coming, such as all the repair and maintenance interactions aboard an Idris, for example. Meaning you have to take a component out of certain parts of the ship, inspect them, replace them, carry them around. And all that was not ready the first time we tried to do it. And so what we realised is that the system has to be request based and be able to cater to all of those new requests that were coming in. So we created a whole process where people can actually request usables. We created a way to categorise those requests and identify what was stuff that we could already do - and that would go through the pipeline easily - and stuff we actually had to build some tech for. And coming out of CitizenCon there was a lot of stuff we needed tech for.
So once the team was actually created it was really easy to pick from those and actually just go “Oh, okay, let’s lift that blocker. Let’s lift that blocker. Let’s lift that blocker.” And just go for it.
JL: So the first time you have seen a usable is the hacking panel. Then we moved on. We also converted the medipen dispenser: that’s going to be a usable - hopefully in the next release. So we’re slowly growing the player’s interaction with the usables. And then, at the start of this year, designers wanted to use the usables for AIs as well. So we started this whole new sprint paradigm where we’re working with Design, we’re working with the prop team, we’re working with the animators: everyone together in a task force that’s going to grow these AI actors interacting with things in the world in lots and lots of different ways.
So the first case we had was the mess hall table. We had a single big usable, like a long mess hall table where eight different actors could - potentially - sit at the same time. They could choose to eat together like a family, all sitting there at the same time. So that was the first thing we had to do: we had to take the usables, which up until now had a single entry point - like a single actor could use it - and make sure that all these eight actors could potentially use it at the same time.
They can do different things in it so we’d have to grow the idle sets. So we don’t want them all to sit exactly uniform: we want them to be allowed to have a small offset, play different animations. So that was another thing that was added.
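The per-seat variation described above - small offsets plus idle animations drawn from a pool, so eight actors don't sit in uniform rows - can be sketched like this. Everything here is invented for illustration (names, ranges, the per-seat seeding), not CIG's implementation.

```python
import random

def seat_variation(seat_index, idle_pool):
    """Give each seat a slightly different pose: a small positional
    offset and an idle animation picked from a shared pool, seeded
    per seat so the result is stable but non-uniform."""
    rng = random.Random(seat_index)
    offset = (rng.uniform(-0.05, 0.05), rng.uniform(-0.05, 0.05))
    idle = rng.choice(idle_pool)
    return offset, idle
```

Seeding by seat index keeps the variation deterministic across sessions while still breaking up the "four identical slots per side" look.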
Every place on a mess table is slightly different. You can’t use the same animation to sit at the end bit of a bench as you would to sit in the middle of the bench. You’ve actually got more entry points if you are coming in from the side of something than if you’re coming in from behind it as well.
So to make it feel natural you don’t want someone to have to walk around the table in order to get to the end. You want them to be able to use the shortest path like any human would. And once they get on it as well it’s the whole sort of “what do they do when they’re at the mess table?” If there’s eight of them how do you break it up so it doesn’t feel like they are literally just lined up in four slots on each side. Doing the exact same thing. Playing through the exact same sequences. How do you make it feel natural? And as well as getting off. You want to make sure that when you come out of a usable, you don’t always go out the same way. You don’t always go out behind you and then set off in your new direction. You might go out slightly to your left or slightly to your right because that’s where you want to go next.
And then I think the complexity - if you take the mess hall as an example - is the table: there’s things on the table. There might be a glass. There might be a plate. You need to eat while you are… so while using the usable you’re actually using another thing on top of it. So there’s this stacking behaviour that grew this feature beyond the original.
If you look at the hacking laptop it’s just a case of: get in there, hack for a certain amount of time. A player might choose to cancel the hack and say “Okay, I’ve interrupted it.” Or the laptop itself says “Well, you’ve finished the hack. This usable interaction is no longer available to you. You need to stop using it.”, which means you would play the exit. So it’s a simple entry animation leading to a looping idle where you’re doing something because of the interaction you’ve chosen, and then you can either choose to exit it - because you don’t want to do it any more and you want to cancel the interaction - or the usable itself says “Well, you can no longer do this.” So now you need to play the exit. Entry, idle, exit.
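The entry/idle/exit flow described for the hacking laptop is a small state machine. A minimal sketch, assuming invented names - the idle loops until either the user cancels or the usable itself revokes the interaction ("you've finished the hack"):

```python
class UsableSession:
    """Illustrative entry -> idle -> exit -> done flow for a usable."""
    def __init__(self):
        self.state = "entry"

    def update(self, user_cancels=False, usable_revokes=False):
        if self.state == "entry":
            self.state = "idle"      # entry animation has finished
        elif self.state == "idle":
            # Loop in idle until the user cancels the interaction or
            # the usable revokes it (hack complete, object destroyed...).
            if user_cancels or usable_revokes:
                self.state = "exit"
        elif self.state == "exit":
            self.state = "done"      # exit animation has finished
        return self.state
```

Both exit paths - player cancellation and usable revocation - funnel through the same exit animation, which is the symmetry Jens describes.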
But then when you come to AI, they might want to do lots of things when they are using the usable, because of the stacking behaviour. The same could be true for something as simple as the laptop: you might go up to it, like “Okay, I go up to my laptop - I’ve said I want to hack this - I’m going to hack it.” But then I go “Okay, actually I don’t want to hack it. I just want to destroy it, ‘cause my hack failed.” So I choose the “destroy” option, and I happen to have brought a glass of water with me and just pour it over the laptop. Off I go and it’s destroyed, and now you’re no longer allowed to use it. There’s nothing you can use here. It’s a destroyed object.
I think the code to destroy it would be… in the Item 2.0 world that would be another component. This is a destructible object. Something describes it, and it’s up to the usable to link an interaction that says “I want to destroy it”, play the suitable animation and at some point in that animation trigger the destruction behaviour. The idea is this usable component is the glue between what you can do and what happens when you do it.
How do we make the usables feel more real and natural? A lot of that comes down to having variation in the animations. So to begin with we had a cumbersome setup… if you look at the entry alone, we had a setup where you would say “Right, here’s a usable. We have five animations. This animation stops here, this animation stops here, this animation stops here.” And so on. And then in data you would have to say “We set up this point here. We link that to this animation.” And then the AI will choose the one that’s most natural based on where it’s moving from and which direction it’s coming in from, and try to get a really nice natural motion into the usable.
But of course we realised quite quickly that wouldn’t really scale, ‘cause now if you want to create another one, that’s another data point. And then you’re going to have to do that for every single usable. That’s a lot of data you have to set up.
GA: We basically approached the problem from two angles. We need this quality of animation, this fidelity - we need it to look good and feel alive. That’s one thing, and we’re getting closer and closer to getting it just right. But the other thing is we’re going to have to build like thousands of usables to populate all the outposts, all the landing zones, the insides of ships, on planets and such, and inside all of the levels of Squadron 42. We can’t afford for it to be too heavy on production. We need to optimize the process. We need it to be actually fast, so we’re trying to get all of the good stuff without any of the bad stuff. Which is: we needed it to look good, but it needed to go fast; we needed to be smart, but we needed it to not take too much time or too many people and be too expensive on production. And we needed to be able to basically… if someone requests a usable and he gets it the day after, then my job is done.
JL: So what we’re doing instead is we’re just saying, okay, give us all the entry animations - give us a big group of animations - and then we just extract the movement from those animations and build up the entry points from the validated points, because you don’t want to be able to enter a usable by going through another usable for another person, or, if it’s outside the navigation mesh, the AI wouldn’t be able to find a path to it. So we can remove all these dynamic points, re-add them however you want really, and if someone wants to add a new entry point or a different style of entry they just put it in there. It builds the data dynamically, and the more variation we have there… I mean, already you’re going to get a much more natural style. You might have a way to sprint into a usable or just walk into a usable, as well as getting to it from the other side - whichever feels more natural - and that gives you variation as well, because each animation will have its own little piece of unique style. So you can have the same usable, you can be looking at five, six AIs using it, and every time they’re playing a slightly different animation getting in there. And then once you’re in the usable itself, having variation in how you look while you’re doing that is a great way to give it more life. So, making sure that there’s not just a single style of sitting, but lots of different styles, and give it variations: if they’re staying on a seat for a long time, make sure they’re constantly shifting around, fidgeting a bit, settling into a new pose. We’re hoping that that will give it a more natural look. Other big things are making sure that - if you’re looking specifically at the seat usable - the upper body is still available, and they can do things like glance around.
You know, if the player comes into view they can maybe look at him, or just acknowledge someone else walking past them, even though they’re busy playing a usable interaction.
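The dynamic entry selection described above reduces to: extract entry positions from the animations themselves, filter out invalid ones (off the nav mesh, or passing through another usable), and take the closest remaining one. A minimal sketch, with an invented dict layout standing in for the real data:

```python
import math

def pick_entry(actor_pos, entries):
    """Pick the nearest valid entry point for an approaching actor.

    entries: list of dicts like
      {"anim": "sit_from_left", "pos": (x, y), "valid": True}
    where "valid" reflects nav-mesh / overlap checks done elsewhere.
    """
    candidates = [e for e in entries if e["valid"]]
    if not candidates:
        return None  # no reachable way into this usable right now
    return min(candidates, key=lambda e: math.dist(actor_pos, e["pos"]))
```

Because the candidate set is built from whatever animations exist, adding a new entry style is just adding an animation - no extra hand-authored markup, which is the scaling win Jens describes.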
GA: One of the things that the usable has been through is this process of actually failing once, which I’m smiling about, but it was actually painful. The good thing though is that it just highlights how iterative the process is, especially when we’re trying to push the envelope to do something new and something fresh, something that has never been done before. Doing that, we have to try things, and we have to sometimes take things out that we thought worked and didn’t, and take new things in that we thought weren’t necessary and actually were. And so what has really been good with the usable is that, following this first failure, we adopted a much more fluid approach where we try something, we review it, we put it through its paces a little bit, then if it doesn’t work we change to something else. And we just try and cover every problem that is thrown our way, not one-by-one, but with one system, trying to make it evolve to cover everything rather than extend it over and over until it crumbles.
JL: Looking back at the sprint, I think the biggest breakthrough for me personally was when we were able to get rid of the alignment slots. So, not having to say, “This is an entry point. This is the animation that plays.”, but being able to just say, “Here is a group of animations that will let you get into this usable.” You can dynamically build all the data you need in order to select which one’s best for your current situation, and likewise do the same on the way out. Just say, “I’m going to exit this seat”, or “I’m going to stop hacking now, and I’m going to go left.” Just making sure a small thing like that - that the exit is already taking them in the direction they’re now going to move - is such a massive advantage to how you scale this and give it the sort of fidelity that it needs. Then other things, as you know… we try and use IK where possible to allow someone to be slightly out of alignment, so you don’t have to sit perfectly on a seat. You don’t have to get perfectly on to the hacking console, but it’s constantly allowing a little bit of fix-up in the extremities. So if your hand needs to reach a cup, make sure that we’re not forcing you to be perfectly aligned with that cup. The cup can be off by a bit, it can be rotated slightly differently, and we can use a bit of IK to get the correction in the hand so that the reach feels natural and looks natural, and we just try to reduce snappiness wherever possible. And we use the same with positioning as well, if we need something to be quite exact.
We always try to average the correction out over a longer period of time, so that you don’t see that kind of pop. If it’s five centimeters out you don’t want one frame to say “now fix by five centimeters”, because at 60 frames per second a five centimeter leap in one frame will be noticeable. So you kind of smooth that out, and try to make it as data driven as possible: give the designers the ability to say “do the positional moving over this amount of time.” Same with the IK, right - when do we start the IK?
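Spreading a positional fix-up over several frames instead of one is just dividing the error by a frame count and applying it incrementally. A minimal sketch (1D, invented names) of the technique described above:

```python
def smooth_correction(position, target, frames):
    """Distribute a positional correction evenly over `frames` steps,
    so a 5 cm error becomes several imperceptible 1 cm nudges rather
    than one visible pop at 60 fps. Returns the per-frame positions."""
    step = (target - position) / frames
    path = []
    for _ in range(frames):
        position += step
        path.append(position)
    return path
```

In practice the frame count (or duration) would be the data-driven knob the designers tune, and the same blend-in/blend-out idea applies to the IK weights mentioned next.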
We’ve got two types of IK that we try to use. We’ve got animation driven IK, which is in the control of the animator. The animator says “you can use this” - first of all they can say whether you can use it or not, and then they can also plan the in and out over time. We use it a lot for weapons: if you want to reload a weapon your left hand has to come off of the weapon, but you don’t want to snap it onto the magazine. You want it to gradually come down, but you might not know where it is, because you might hold the weapon differently, because you installed some sort of custom attachment on it that wasn’t there in the original animations and we’d have already offset your hand to grip it correctly. Then the other type of IK is more of a design driven IK, where it can go like: right at this point in the animation I want them to start reaching for this point, this cup, this part of his face - you know, finish a salute - and fix up a little bit where the hand goes. That’s done more as a layer on top of animation. So you have the animation, and then you have the data from the designer saying “how are we going to blend this in, how are we going to blend it back out, what hand is moving, and where is it moving to.”
GA: What’s actually interesting with the usables theme, on the more human side, is how distributed the work is. I’m the member here in Frankfurt; we have Jaimie over in the UK, and Jens’ team is also lending a hand a lot in the UK. Then Curtis is working on props and geometry, and often lends a hand from the UK as well, and we’re also working with Bryan Brewer over in Austin, in the US.
Bryan Brewer (BB): Today we happen to be doing a pickup shoot for our usable system. We’re going to be doing things like opening doors; we just finished our door metrics and figured out what we need for that. Toilets ... we have finally settled on toilet metrics, so we have captured a couple of little moves that we need to flush out our toilet system. [Laughs] I couldn’t help it. We do little impromptu shoots like this in order to capture the little bits and pieces that we’ve been realizing we need through research and development. We have something that we’ve been capturing called transitions. They’re little animations that help us move from one animation to the next. What this allows us to do is, any time we have a big shoot at Imaginarium Studios, or if we set up our full system, we have these little transitions that will help us get into and out of those particular motions.
JL: Some of the usables are quite complex in that they chain a whole interaction sequence. Take the tray dispenser, for example: when you go up to use it you actually do the full sequence. You go up, you get the tray, you get a cup, you get a plate, you get a knife and fork, and that’s one big animation. But during that one big animation the character actually has to become the owner of a cup, become the owner of a plate, become the owner of a fork, become the owner of a knife, and then release ownership of all of those. Those items might be slightly off as well, because maybe he didn’t step all the way up, or he came up at a slight angle. So every time he’s touching one of those items we might want to do a small animation-driven or procedural fix-up to reach the item. It’s a question of where you draw the line, right, but the whole idea with the usables is that you shouldn’t have to. They should be so scalable that, unless we reach a point where we go, “Right, we’re taking it too far. This is going to take too long to support. Let’s scale it back and simplify this.”, as long as you can describe it in data for the code, it hopefully shouldn’t be that much different.
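One way to picture that data-driven chaining, with entirely hypothetical timings and names: an event track attached to the one long animation, where each event acquires or releases ownership of a prop, so designers can extend sequences without new code.

```python
# Hypothetical event track for the tray-dispenser sequence: one long
# animation that picks up several props and finally releases them all.
# Times are animation-time seconds; all values are illustrative.
TRAY_SEQUENCE = [
    (0.5, "acquire", "tray"),
    (1.2, "acquire", "cup"),
    (1.9, "acquire", "plate"),
    (2.5, "acquire", "fork"),
    (2.9, "acquire", "knife"),
    (3.6, "release_all", None),
]

def owned_items(events, t: float):
    """Which props the actor owns at animation time t, derived purely
    from the data track (events must be sorted by time)."""
    owned = set()
    for time, action, item in events:
        if time > t:
            break
        if action == "acquire":
            owned.add(item)
        elif action == "release_all":
            owned.clear()
    return owned

# Mid-sequence the actor owns the tray, cup and plate; once the clip
# passes the release event, ownership of everything is dropped.
```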
CR: As you can see, usables will be a central part of the universe going forward. That’s why the team has put so much work into making the system as robust as possible, and we’ve still got a lot more work and functionality to add, because being able to interact with a wide variety of items and locations will only add to the realism that we’re trying to achieve with Star Citizen and Squadron 42.
SG: That’s all for today’s episode. For more information on the development process, take a look at our production schedule, which will be updated tomorrow.
CR: Yes, thanks to all of our subscribers for your support. We can only produce Around the Verse, Bugsmashers and all our other great shows because of you guys, so thank you very much.
SG: Yes, and to all of our backers your interest and enthusiasm to see a game done differently is what has made Star Citizen possible.
CR: Yes it really has and thank you guys very much as well. Until next week, we’ll see you...
CR/SG: Around the Verse!