Webinar Replay – Audio Processing for a Better Home Movie Experience
Cindy Zuelsdorf:
Let’s get started. Welcome, everybody. I’m so glad you’re here today. And hey, M.C. How’s your day going?
M.C:
My day is good, busy and busy, but no, very good.
Cindy Zuelsdorf:
Nice. I’m Cindy Zuelsdorf here with M.C, and thank you all for being here for Audio Processing for a Better Home Movie Experience. M.C and I were chatting about this a few weeks back and he had some great things to talk about. So we’re going to get into why the theatrical mix isn’t suitable for a home viewing environment, how the cinema and home mixes differ from one another, what you can do to achieve an optimal mix without investing a great deal of time and money, and how other people are doing it right now, out in the field. So over to you, M.C.
M.C:
Okay. Thank you, Cindy. So what we’re going to do today is, first of all, just a little bit of background. Because cinemas aren’t open, the timely release of movies has been frustrated. I’ve been dying to see the Bond movie, and it’s not going to be released; it got delayed in the cinemas because of COVID and so on. And theatrical release is a massive source of revenue for the studios. And so there is some talk about saying, we need to release this onto online platforms.
M.C:
We have been doing some work because there are people who are delivering content as broadcasters and as online platforms, who have theatrical mixes to contend with. And so I thought it’d be a good idea to have a chat about it and have a webinar explaining the issues and so on. I’m going to keep this largely non-technical. For once I’m going to use a short PowerPoint, but I will pare it down to two or three minutes, and I will try and show a clip of the processing before and after.
M.C:
So what I’m going to do is start by talking about audio mixing for the cinema, and why it is different, what the constraints are, etc. So you’ve got to remember, a cinema is a calibrated environment. If you want to present content in a cinema, the cinema sometimes gets certified by Dolby or by DTS. And what they do is look at the acoustics of the cinema, because you don’t want echoes inside, the speaker layout, the quality of the speakers and so on. So you create a cinema that is suitable for theatrical mix delivery, and then the person who’s mixing the audio knows that it’s going to be exhibited or presented in a controlled environment. So they have a lot of freedom when it comes to doing the audio mixing, to create a good experience for you. And obviously, if you’re watching action movies, the good experience is guns and planes and noises. And if you’re watching a period drama, then there may be moods and music and dialogue and so on.
M.C:
And the cinema, as I said, acoustically, it’s really well-prepared for this sort of delivery. The second thing is the attention span. We go there for a 90 minute or two hour experience, or, if you’re Bollywood, a three hour experience, but with a three hour experience, they give you a 10 minute interval. So what it means is your eyes and ears and everything can adjust and can deal with it. Now, because the creative audio guys and the directors want to create a certain impact, there are some standards for delivery, but by and large, it’s a bit of a free for all as to how you do it.
M.C:
My favorite story is Christopher Nolan, who, for one of his movies, put up a billboard outside the theater which says, you may not hear the dialogue and that’s intentional, which wouldn’t be so good if you’re doing Hamlet or something. But anyway, that’s a creative license that exists. These guys are billion-dollar box office movie makers, so they generally get what they want.
M.C:
Now, when you come into a broadcast environment, first of all, if you look at your living room, it is probably the epitome of a non-acoustic environment. The telly’s shoved in a corner somewhere. The speakers are… In the old days, we used to have a nice three inch speaker, but now with the sets getting thinner, the speaker quality is variable, and there is a lot of noise, and we watch TV for a long time. And I’m probably speaking as an old guy who watches TV, as opposed to the kids who watch it on the laptop. So the audio challenge is different. And what the broadcasters have had for a long time is standards that say, “When we deliver audio, we want to have a certain audio level or approach that is suitable for that environment.” And we call it program loudness.
M.C:
In program loudness, what you’re really saying is that the average level of audio has to meet a certain value. So you can have loud bits, you can have quiet bits, but within a program, the average must always be consistent. Again, we won’t go into too much detail. I’m talking about these concepts because later on, I’m going to talk about what the challenge is as you go from one environment to the other. The other key thing with program loudness is that there are, broadly speaking, two standards: the EBU one for the Europeans, and ATSC A/85 for the Americans.
M.C:
And then there are variations of that adopted by all the other countries, but there are two numbers. The average for the EBU is minus 23, the average for the US is minus 24. Then there are true peak levels that are really designed so that you don’t drive your peaks into distortion, and they could be minus one, minus two, minus three. The good news or the bad news is that even with these small numbers, every individual broadcaster has their own flavor of this. So when you’re delivering content, it’s important to meet these standards and requirements. And what this means is you’re going to change whatever mix has been provided, whether it’s a theatrical mix or an episodic mix, and change it to meet a broadcast spec. Now, it’s a technical requirement, not a creative requirement. However, you need to make sure that when you try and meet the technical requirement, you keep the creative intent as true as possible.
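Note for anyone who wants to see those numbers in one place: here is a minimal Python sketch of a compliance check against the two headline specs. The loudness and true-peak figures are the commonly published EBU R128 and ATSC A/85 values; the tolerance, and the per-broadcaster variations mentioned above, are assumptions you would adjust per delivery spec.

```python
# Commonly published delivery targets. Individual broadcasters layer their own
# variations on top of these, as described above.
DELIVERY_SPECS = {
    "EBU R128":  {"program_loudness_lufs": -23.0, "max_true_peak_dbtp": -1.0},
    "ATSC A/85": {"program_loudness_lufs": -24.0, "max_true_peak_dbtp": -2.0},
}

def complies(measured_loudness, measured_true_peak, spec, tolerance_lu=0.5):
    """Check one file's measurements against one delivery spec."""
    loudness_ok = abs(measured_loudness - spec["program_loudness_lufs"]) <= tolerance_lu
    peak_ok = measured_true_peak <= spec["max_true_peak_dbtp"]
    return loudness_ok and peak_ok

# A theatrical mix averaging -20 LUFS with -0.5 dBTP peaks fails both checks.
print(complies(-20.0, -0.5, DELIVERY_SPECS["EBU R128"]))   # False
```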
M.C:
So that’s your broadcast environment. If we now talk about the online environment, the online environment has fewer rules, fewer standards, and they’re kind of ad hoc. Netflix, if you can call them an online platform, do have a very specific requirement. They went away and thought about it and they came up with a requirement which isn’t the broadcast requirement. It’s a standard of their own. Again, I won’t go into details other than to say it’s different, and why that matters will follow when we start processing. Apple have a guideline, Amazon have a guideline. Everyone has a slightly different guideline, but the general rule of thumb is that the environment in which you’re watching some of this content is very different to the living room. You most likely are listening to it on your headphones. Very likely, if you’re outside, you’re also in a noisy environment, on a train, on a plane, so there’s consumption there.
M.C:
Now, obviously there’s also consumption in the house. So there is… I don’t watch serious content on a… Well, I do watch it on a plane, but then it’s the mix that they’ve done on the plane itself. But I wouldn’t think about watching it on a train or anything, but a lot of people do. I watch a lot of that content in the house, on my TV. So it’s important that the audio mix for that is of high quality so that it’s as good an experience as a satellite or terrestrial broadcaster, if not better. So here we are, we’ve got these three scenarios, the cinema, the broadcast and the online delivery. So I spoke about program loudness as the average.
M.C:
There is one other parameter for this particular discussion that’s really important. And that’s the thing we call loudness range. Now, loudness range is a way to describe the dynamic content, how dynamic the content is in a movie or in a program. And it isn’t the loudest bit or the quietest bit, because if you did that, everything would have high dynamics. There’s always one peak you could find that’s very loud, and there’s always silence in programs. When you measure program loudness, you integrate 400 millisecond blocks of audio, and then you treat those integrated numbers in different ways to get different values. Again, I’m trying to keep this simple.
M.C:
If anybody’s interested in a very technical explanation of this, we have a PowerPoint that we can send, which has got a lot of detail and which we’re happy to share with you. We also like to work with our customers by interacting with them, understanding their needs, and then offering some of our expertise and experience. So if you do want to do that, we’d encourage it. We’ll also let you download our licenses and actually try it out on your content, with your people making the subjective quality assessment. Now, this is important because of the creative intent.
M.C:
Loudness range. So let me try and give a hand-waving explanation of why it’s important and what you have to do. The first thing with loudness range, as I said, is that it’s an attempt to understand the dynamics. And the way we do that, and I will put up a slide a bit later and go into detail then, is, as I said, we have these integrated blocks. And as those blocks appear along the timeline, we note them, and then we do a statistical model of how often a particular level value comes up. And then we draw a histogram of it, and then we chop off a percentage of it on the quiet side and a percentage of it on the loud side. And the middle block is what we call loudness range.
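Note: for anyone who wants to see the “chop off the tails of the histogram” idea concretely, here is a minimal Python sketch. It loosely follows the published EBU Tech 3342 recipe (gate values and percentiles as given there), not the processor discussed in this webinar, and it assumes the short-term loudness readings have already been measured.

```python
# A minimal sketch of loudness range: gate out silence and far-below-average
# blocks, then report the width of the distribution with its tails removed.
import numpy as np

def loudness_range(short_term_lufs, low_pct=10, high_pct=95):
    """Return an LRA-style value in LU from short-term loudness readings."""
    levels = np.asarray(short_term_lufs, dtype=float)
    levels = levels[levels > -70.0]                  # absolute gate: drop silence
    levels = levels[levels > levels.mean() - 20.0]   # relative gate: drop very quiet blocks
    low, high = np.percentile(levels, [low_pct, high_pct])
    return high - low                                # the "middle block" of the histogram

# Example: quiet dialogue scenes plus a loud action sequence.
readings = [-38, -36, -35, -34] * 25 + [-18, -16, -15, -14] * 25
print(f"Loudness range ≈ {loudness_range(readings):.1f} LU")
```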
M.C:
Now, this is a good measure of how dynamic the movie or the content is. Now, because we’re talking about movies, and because I said there isn’t a regulation, broadcasters like to have a range. They like to say the loudness range shall not exceed 16, or it’s between 16 and 18, or something along those lines. The online platforms, again, don’t have a rule, but our experience from working with some of our customers has suggested a rule that works well for online delivery. And again, I will tell a couple of stories about how we work with customers to understand this process. Loudness range is really important for two reasons. The first one, obviously, is that if you have an extremely large loudness range, the living room experience becomes difficult.
M.C:
A couple of examples. I was watching The Accountant on Netflix a while back, and Netflix for some time had wanted to present the theatrical mix. And the house came down and started yelling at me because when the guns started firing, it was extremely loud. And in that environment, if you then say, because it’s so loud, I’ll lower the volume, your dialogue disappears because the dialogue is suppressed. So you need to keep that loudness range in check and process the audio in such a manner that the dialogue is kept at an intelligible level. And again, I will mention a couple of strategies for how one goes about doing that.
M.C:
So we talked about program loudness and loudness range. Program loudness, because it’s just the average, you could do a very good correction of program loudness by simply changing the global gain of your content. If your program loudness is minus 20 and you need to deliver to minus 23, apply three dB of attenuation and you meet the spec. So it’s very straightforward. Now, when you go to extremes, again, it gets challenging, but by and large, that’s the formula. For loudness range, you have some issues. And the issues are to do with the fact that if you need a dynamic range reduction strategy, a simple way to do it is to put the audio through a nonlinear transform. So what you say is, “I’m going to have an S curve on the thing. So I’m going to compress the quiet bits, I’m going to compress the loud bits, and I’ll keep the middle bits linear. And that would be my compression.” Lots of compressors do this. It’s a simplistic way.
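Note: the global-gain arithmetic above, and one common reading of the S-curve idea, can be sketched in a few lines of Python. The knee points and ratio below are made-up illustrative values, and the static curve is exactly the kind of simplistic compressor being described here, not the processor discussed later.

```python
# Program loudness correction is just arithmetic: target minus measured.
def loudness_gain_db(measured_lufs, target_lufs):
    """Global gain needed to hit a program loudness target,
    e.g. measured -20 LUFS with a -23 LUFS target -> -3 dB."""
    return target_lufs - measured_lufs

# A static "S-curve": pull quiet material up and loud material down towards
# the middle, leaving mid-level material linear. Knees and ratio are
# illustrative values only.
def s_curve_gain_db(level_db, low_knee=-50.0, high_knee=-20.0, ratio=2.0):
    if level_db < low_knee:                      # quiet bits compressed upwards
        return (low_knee - level_db) * (1 - 1 / ratio)
    if level_db > high_knee:                     # loud bits compressed downwards
        return (high_knee - level_db) * (1 - 1 / ratio)
    return 0.0                                   # middle bits pass through linearly

print(loudness_gain_db(-20, -23))    # -3.0, the example in the text
print(s_curve_gain_db(-10))          # a -10 dB peak is pulled down by 5 dB
```

Applied naively, a static curve like this is also where the pumping artifact described in the next paragraph comes from: as the level crosses a knee, the applied gain jumps around with it.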
M.C:
The challenge you have is a thresholding effect, where the audio is moving in and out of a particular threshold. And so you get what they call pumping. It’s not very nice, and if you’re watching a nice movie that’s been well mixed, it’s not a good experience. So you want to avoid those sorts of artifacts. There are various strategies for avoiding it, but when we started doing this, we started looking at a process. Before an automated process was available, what happened is every station used to have a sound mixer. So when the movies came in, they would go to the audio department, they would listen to the movie and they would remix it so that it gave you a nice loudness range and everything. Human beings are still the best operators to do this.
M.C:
However, in the modern day, if you didn’t have an in-house service, a lot of the stations started outsourcing this to specialists. So what happens is a lot of our customers basically take the content from the studio and create different versions for distribution to different clients. Why? I just said it when I spoke about the various delivery requirements. So although we have minus 23 and minus 24, there are little variants around this. So every customer needs a slightly unique mix. The differences are very, very small, but if you’re trying to do this manually it would be very, very expensive. So program loudness, you may get away with. Loudness range needs a lot of attention. It needs attention to the whole movie, and so it becomes a very, very expensive process.
M.C:
So we got a call from a major Hollywood studio when we had just started the company, about seven, eight years ago, and they said, “Oh, you guys are doing this automated loudness. We want to talk to you about this theatrical experience and the problem we have.” So we listened to them. They outsourced all this, so they weren’t even going to buy; they were simply advising us that this is a problem. We listened to them, we analyzed the problems, we came up with a bunch of ideas, and we came up with a processor that solved the problem. And it was a reasonably good processor.
M.C:
As time went on… Actually, over the last few years, audio’s become more important to broadcasters, in terms of them paying more attention to this. It isn’t good enough just to meet the spec, because people are saying the content has been modified creatively, and it’s not a good experience. So we started talking to people and Canal+ took an interest in us. They looked at our processor and a few weeks later, we got a phone call from them saying, “Hey, M.C, we want to show you something.” And they said, “This is your processor and here’s another processor.” And the other processor sounded really, really good. We had a long chat and we ended up licensing the process. Two reasons for it: our business is in automating it and solving a range of audio problems. And if we can get the best solution for our customers, if we can guarantee that their ability to automate is 100% achievable, then it’s worth our while doing the licensing.
M.C:
So we’ve licensed this process. It works really well. It was particularly nice that a discerning customer came to us and actually pointed out the issues. A couple of years later, we had a similar situation with ZTV in India, where they came to us with the same idea: “We are listening to this stuff. We want to really put your product through the wringer to make sure it’s suitable for our needs.” We had a bunch of iterative conversations and now all their movies and all their content goes through our processor. And the reason I mention this is that this interaction between customers and us is very important. We encourage it. It’s how we get the best results. Every customer has a slightly different focus and a slightly different need. And the more discerning they are, the more complex their requirement is. But having that dialogue allows us to provide for their needs. And because this is really about whole movies, that’s what we’re focusing on, but there are other audio issues that we address. We do more than just loudness.
M.C:
So anyway, loudness range reduction is an iterative process. What Anton Hurtado, who did the algorithm development, did is an analysis of the audio, and then he came up with a set of metadata that allows us to create any loudness range we want, without some of these artifacts. He’s a big audio buff. He’s also very mathematically oriented, and he came up with this algorithm. So now, with that one analysis, we can do a broadcast delivery or an online delivery. It’s an iterative process, it’s a complex process, but the end result is that, surprisingly, the dynamics are preserved while the loudness range is reduced.
M.C:
If we add to this process Dolby’s Dialogue Intelligence, where we measure the dialogue as we go and incorporate that in the processing as well, we get the best of all worlds. We get a mix that complies with a standard, we get a mix where the dialogue is intelligible, and we get good dynamics. So really, that’s the secret sauce that we have for broadcasters, and then we just apply that same secret sauce with a different set of numbers to get you the online mix. So I’ve been rambling on for quite some time. I’m going to very quickly just do the PowerPoint bit and play the clip, and then I’m going to come back and say how hard or how easy it is to do this.
M.C:
So I thought I’d start with this little slide. These are the integrated measurements for The Matrix, the movie. This is a slide I got from the EBU Tech paper. And these are the two thresholds for the loudest bit and the quietest bit. And you can see that The Matrix has a loudness range of 25. Now, the best way to show all this is to walk you through some of the other slides. So here’s the loudness range. At the bottom there’s a list of movies. So Hamlet is at this end. You can see that the loudness range for broadcast is this one and the loudness range for online delivery is this one, and you can see the divergence. There are movies that go all the way up to 25 and there are movies that go all the way down to five.
M.C:
And so if you’re looking for a loudness range of 15 or 16 and it’s actually six, there’s no need to take it the other way. It needs to be less than, rather than exactly within, that range. But it gives you an idea. If we now look at the program loudness, you can see that the program loudness also varies. So here are some like GoldenEye, Zulu and Man of Steel, with a program loudness of minus 12 or 13. And at the other end, you’ve got Hamlet and Sabrina. This is a bunch of my DVDs that we did some analysis on. And what we’re trying to do is get that target to minus 23 here, and for online presentation, minus 16 or 18.
M.C:
So you’re trying to meet both these targets. And if we put them side by side, you can see that there’s a diverse range of processing that needs to be done. This is a short clip, and watch out towards the end where you’ve got the dialogue.
Speaker 3:
Are you hurt?
Speaker 5:
What?
Speaker 3:
Are you hurt? Are you bleeding?
Speaker 5:
I don’t think so. Are they following us?
Speaker 3:
No. Just calm down. We’re going to be fine.
Speaker 5:
I’m not going to be fine. They just to kill-
M.C:
Okay. If you were listening carefully, when we switched between the original and the processed version, the dialogue was lifted. That was an automated process that did that. It’s a short clip, but obviously we do it to the whole movie. Now, clearly this is an attempt at a demonstration. What we normally do when people are interested in this is say, download the engine, try it out, work with us. If you find a problem, generally, people are very happy to FTP the file to us. We’ll analyze it and we’ll either say, “It’s a problem with a setting, here, try this,” or we will make a change if required.
M.C:
We’ve done this through several iterations, and an awful lot of what I call the golden ears have listened to this with us. And so we think we have a good process now, but there’s always room for improvement, and we encourage people to download it. So, the next thing I was going to show is, if we were to do this, how would you set up the Engine? So this is the analysis screen. We normally analyze the file and then we process it. In this case, this is set up for the US operation, ATSC A/85. We’re going to analyze for program loudness, a loudness range of 15 and a true peak of minus two. So what will happen is Engine will analyze the file, measure it, and then it will come up with the criteria to correct to.
M.C:
So if we now look at the next slide, here we have what we’re going to correct. This is a correction profile. We have a couple of little tricks. If you get tone in your file, which, God knows why, we still deliver files with tone in them, we will detect it and ignore it. But here it says, correct for program loudness, correct for true peak, correct for loudness range. If the program loudness doesn’t meet minus 23, or minus 24 as we had it, it will correct it to that. It will correct the true peaks so that they don’t exceed minus two. And it will make sure that the loudness range doesn’t go above the specified threshold of 15.
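Note: as a rough rendering on the page, the correction profile just described might look something like the dictionary below. The field names are illustrative only, not Engine’s actual configuration format; the values are the ones from the walkthrough.

```python
# Hypothetical rendering of the correction profile described above.
# Field names are illustrative; they are not Engine's real configuration keys.
broadcast_profile = {
    "name": "US broadcast (ATSC A/85)",
    "ignore_line_up_tone": True,         # detect tone in the file and skip it
    "correct_program_loudness": True,
    "target_program_loudness": -24.0,    # LKFS (-23.0 for an EBU delivery)
    "correct_true_peak": True,
    "max_true_peak": -2.0,               # dBTP
    "correct_loudness_range": True,
    "max_loudness_range": 15.0,          # LU; "no more than", not "exactly"
}
```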
M.C:
Now, there is one thing that I forgot to talk about, so I’m going to go back again, dialogue intelligence. So what it’s saying is if you click this box, when it analyzes the file it will look for dialogue, measure it, and then apply that additional correction in the process. If you set this up, you could create a watch folder for this, and you can put 100 movies into it and Engine will quietly go in and correct them. So you’d set one of these up for a broadcast delivery. You’d set another one for an online delivery, and it will go in and crank out the results. I can talk about this forever, but this might be a good time to say, do we have any questions?
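Note: the watch-folder idea itself is simple enough to sketch. The loop below is a generic polling pattern, not Engine’s implementation; the folder names are invented, and process_file is a stand-in for whatever processor or command line would actually be called, one instance per delivery profile.

```python
# A generic watch-folder loop (not Engine's implementation): files dropped into
# WATCH_DIR are corrected against one profile, then moved out so they run once.
import shutil
import time
from pathlib import Path

WATCH_DIR = Path("incoming_broadcast")    # hypothetical folder names
DONE_DIR = Path("corrected_broadcast")

def process_file(path: Path, profile: dict) -> None:
    # Stand-in for the real loudness / loudness-range processor.
    print(f"correcting {path.name} against profile {profile['name']}")

def watch(profile: dict, poll_seconds: float = 5.0) -> None:
    WATCH_DIR.mkdir(exist_ok=True)
    DONE_DIR.mkdir(exist_ok=True)
    while True:
        for path in sorted(WATCH_DIR.glob("*.mov")):
            process_file(path, profile)
            shutil.move(str(path), str(DONE_DIR / path.name))
        time.sleep(poll_seconds)

# Run one of these per delivery target, e.g. one for broadcast and one for online:
# watch({"name": "US broadcast (ATSC A/85)"})
```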
Cindy Zuelsdorf:
We do have a couple of questions that came in. If you have questions, go ahead and put them in the chat. To start with, somebody asked, can you talk more about the difference between program loudness and loudness range?
M.C:
Oh, okay. So I will talk about three things. Traditionally, what we used to do is we used to measure the peak level of the audio. And that was an old measurement, PPMs we called them. And they worked really well, they were designed to solve a technical problem, and that was that if you had too much audio, it interfered with the color information when we did the transmission. So you needed to limit the amount of audio. And then in the eighties, the commercials people discovered that although you’re not allowed to exceed it, there’s nothing to stop us from staying very close to it. And so they did, and that’s how we got very loud commercials.
M.C:
And as a result, one of the things that happens when you stay close to it is you can’t preserve the dynamics of the audio, or you don’t, you lose them, because you’ve just created what they call a sausage: everything is at a constant level. And actually, if you go back to some of the older ads in the seventies, they used to have a lot of classical music in them, and they had dynamics, but the impact, “buy my car tomorrow at 10.99” or whatever, dominated, and it was extremely irritating.
M.C:
The reason I’m harping on about this is that a few years ago, the broadcast bodies got together and said, “We need to find a way to change this. What we’re really trying to do is have that experience where we encourage people to find the dynamics again.” And so we said, “How do we do that? How do we allow people to have dynamics and yet maintain a standard?” So they came up with this idea of the average loudness, or average audio level, in a program. It’s a little bit cleverer and more complicated than just an average, because they have a thing called gating. Our detailed PowerPoint, which we’ll send to you, will explain this, but that’s the average level. So the average level, if you think of a commercial, the commercial could have no dynamics, but it could still have an average. But if you have dynamics, you can have an average and you can have dynamics. So loudness range, in some ways you could say, is a measure of how much the program content deviates from the average, whereas the average is the average.
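Note: the gating idea in that answer can be shown in a short Python sketch. It follows the two-stage gate from ITU-R BS.1770 in outline, working from per-block loudness values that are assumed to have been measured already; block overlap and channel weighting are left out for brevity.

```python
# Program loudness as a gated average: ignore silence, then ignore blocks far
# below the average of what remains, then average the rest (in the energy domain).
import numpy as np

def gated_program_loudness(block_lufs):
    blocks = np.asarray(block_lufs, dtype=float)

    def mean_lufs(values):
        # Average energies, not dB values, then convert back to dB.
        return 10 * np.log10(np.mean(10 ** (values / 10)))

    blocks = blocks[blocks > -70.0]                     # absolute gate: drop silence
    relative_gate = mean_lufs(blocks) - 10.0            # 10 LU below the gated mean
    return mean_lufs(blocks[blocks > relative_gate])

# Pauses and quiet gaps don't drag the average down:
speech_with_pauses = [-23.0] * 80 + [-80.0] * 20
print(round(gated_program_loudness(speech_with_pauses), 1))   # -23.0
```

A flat commercial and a dynamic drama can land on the same gated average; loudness range is the measure that separates them.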
Cindy Zuelsdorf:
Another question to follow on from that is, is the Emotion dialogue intelligence similar to Dolby’s?
M.C:
It is. It is exactly Dolby’s. We have licensed that technology from Dolby. So yes, it is Dolby’s Dialogue Intelligence. Now, there are a couple of reasons why we did it. One is it does help with the dialogue. The other one is that a famous broadcaster in LA said to me, “Don’t knock on my doors until you have dialogue intelligence in your product.” And we always listen to our customers. It is a choice that you have. In Europe, people did not like the notion of dialogue intelligence, which is why it’s not included in any of the broadcast specs, but we give you the option to use it. So as a product maker, although we follow the specs and so on, we also follow the flavors that our customers like.
Cindy Zuelsdorf:
We do have a question here asking why is the audio online different to audio for broadcast? And could you explain a little bit more about those differences?
M.C:
There’s a group in Europe called PLOUD that discusses this at great length, but the simple explanation is that the broadcast environment, from a listening point of view, is a little bit more controlled. As I said, it’s your living room; there is ambient noise around it, but it’s not always persistent, and it’s not very high. So you’re not competing to hear the audio content. The online requirement, as I said, if you’re in a noisy environment like trains and planes, then the audio level needs to be higher. Now, there are people who argue that it shouldn’t be the case. You should be able to get away with the broadcast one. And if you can’t hear it, just crank up the volume on your phone or your iPad or whatever device you’re listening to.
M.C:
However, the platforms have argued that they want a higher level of audio. So again, there is no reason why the broadcast mix wouldn’t work, other than that on your phones and on your iPads, I think there are limiters on how loud they’ll let you go, because you’ve got headphones on, so you don’t go deaf or damage yourself. So there are some limiters there. But a little experiment, or really an observation, is that I was switching between my regular television and Amazon Prime on my TV, and Amazon Prime is louder. And that’s how it’s set up. Now, again, in your home environment, if you get it louder, you’ll have to turn it down, but you can still maintain quite a lot of the dynamics, even though you’ve lifted that level. Obviously your peak-to-average level will be reduced, but I think that you will get a good experience.
Cindy Zuelsdorf:
That’s great. Now I’ve got two questions that in my mind go together, so I’m going to throw them in there together. And one of them is, how do I find out how good the processing really is? And somebody else asked, can you tell me about the pricing of the software?
M.C:
Okay. So the only way you find out how good it is, is to try it. It is a very subjective thing. Now, we like to think that although it’s subjective, the processing has been carefully tuned, but we just sit there and say, “Don’t take my word for it. Try it.” Now, to give you a couple of examples of this, we have lots of them. We love people trying things out because it’s our way of saying we have nothing to hide. Please listen to it with your material, in your environment, with your ears, because if we can win that, then we have a sale.
M.C:
So the important thing is that test. We have a number of customers who are constantly doing these tests. Our job then is to help and assist to make sure that the test goes well. So we go online, we will do TeamViewer sessions. If something goes wrong, we’ll encourage them to actually send us the file for analysis. I have been sent some amazing content over that. We will sign NDAs. We promise to delete all the content as we get it, et cetera. We respect the copyrights and the nature of it, but it’s the way we work together. So I think the answer is, try it, listen to it. If you don’t find it meets your requirements, talk to us about it. And maybe there are some limits that we can help with. Nine times out of 10, we help with some adjustments.
M.C:
And to give you a measure of our perseverance, and this is no complaint against my customer, ZTV took a good 18 months of evaluation. Now, that wasn’t because they were just taking 18 months; that was because they needed to have the right people at the right time. And their interest in what we did grew progressively, until such time as the purchase requirement came about, but we were happy to do it. We’re very glad we did it because we got a great result and a very happy customer.
M.C:
Now, in terms of price, price is a… We have a very modular system. Very often people don’t buy just the loudness and loudness range processor, they buy a bigger system. But it will start at around $10,000 and work upwards. Now, we have customers who are processing 10,000 hours of content a month, and their system runs into five figures and upwards, and we have customers who are sitting there saying, “We just want to do a few files at a time.” So the system scales is really what I’m saying.
M.C:
And the one thing I didn’t mention is, I spoke about automation. We have an API for the product, so you can integrate it with your own control system. We have some MAM integrations. We integrate with Telestream Vantage, [Espera 00:36:35], Orchestrator, but we also have another system of watch folders and clients that allows you to automate the process without hooking up to a mainstream MAM.
Cindy Zuelsdorf:
So we have a question. Do you have a lot of presets for reference inside of EBU, et cetera?
M.C:
Oh yeah. So what we do there is that the settings that I showed you, you can set them up and give them a name. So, for example, although, as I said, the standards are slightly different, our first customers were actually big post houses in the UK that were delivering commercials all over the world. And so every time they had to deliver to a broadcaster in Brazil or Spain or Portugal or India or wherever, there would be a broadcast spec that came along. And what we did is we created a system in the product, which is just called a profile, and you give the profile a name. So this one’s for Brazil, this one’s for Fred, this one’s for Bloggs. And then the operator simply says, “We’re delivering to Fred, so we’ll use the Fred profile.” And that will store the measurement criteria, the correction criteria and also the channel layout, if you require it.
M.C:
The system I showed was from our desktop product, but we have a more capable product, Engine, which allows you to set up a workflow and set up and store individual profiles. You can have as many of them as you like. We have customers with upwards of 100 profiles.
Cindy Zuelsdorf:
One more question. I have an existing media management system. How can your system be integrated with it?
M.C:
Okay. So, as I said, we have a REST API. Now, when you have an existing MAM system, it needs cooperation from both sides. Sometimes we’ve already done it, but if we haven’t, we would share the API with the MAM provider. We would loan them an engine so that they could do the integration work, and then we’d work with the client. Client pressure helps in these instances, because people won’t do this speculatively, but we have managed to do that. And it really is two or three days of work. It’s not very difficult.
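Note: for a sense of what driving the processor from a MAM looks like, here is a hypothetical sketch using Python’s requests library. The endpoint URL, JSON fields and response shape are invented for illustration; the real calls would come from the vendor’s API documentation.

```python
# Hypothetical sketch of submitting a correction job over a REST API from a MAM
# workflow. The URL and JSON fields are invented for illustration only.
import requests

ENGINE_URL = "http://engine.example.local:8080/api/jobs"   # placeholder address

def submit_correction_job(media_path: str, profile_name: str) -> str:
    """Ask the processor to correct one file with a named profile and
    return a job identifier the MAM can poll later."""
    response = requests.post(
        ENGINE_URL,
        json={"input": media_path, "profile": profile_name},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["job_id"]

# job_id = submit_correction_job("/media/feature_mix.mov", "US broadcast (ATSC A/85)")
```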
Cindy Zuelsdorf:
Thank you everybody for being here. I’m so glad you joined us for Audio Processing for a Better Home Movie Experience. And it was super helpful, M.C, to hear about loudness range and program loudness and all about the cinema at home and what it means for playout and broadcast and post. All of you watching the replay and all of you here live, again, let us know if you want to get the technical PDF with all of the details, and by all means, sign up for a trial. Thank you, M.C.
M.C:
Thank you very much for giving us your time. And as Cindy says, please feel free to contact us. We have the PowerPoint, we can let you have the download and we’re happy to work with you.