Blackboard

V2.5 Update Details from Discord

Original message here

Hello @everyone! :kermie_wave:

Apologies for the lack of activity; a lot happened really fast in the Epic Games and OpenAI world. I’ve been working hard to adapt to the switch to the new FAB marketplace (which has been very seamless so far, thank you Epic), as well as learning how the Assistants API works, plus several other things which I’ll cover below.

**FAB marketplace/Price change**
I suppose I’ll write about the not-as-fun one first:
I’ve received quite a few comments about AIITK being “too cheap” or people generously requesting to donate to help support development because they feel like they got a bargain. A year ago I would have disagreed, but as I take a step back and look at everything that it can do now and all that I’ve learned, a major part of me wants to drop everything in order to work on AIITK full time. I would be able to release updates more frequently, keep the documentation updated, make YouTube tutorials and other fun stuff.

In hopes of that being a reality sooner rather than later, I’ve decided that *increasing the price to $59.99 whenever FAB goes live* would probably help towards that goal. I don’t plan on changing the price for about a week or so after it’s available on FAB, to give people a chance to grab it at the lower price and to allow me some time to focus on last-minute bug fixes before pushing V2.5, but hopefully you can understand.

**Assistants API**
As I mentioned somewhere, the Assistants API is very heavy, as in, there are a TON of parameters you can control. You’re also not interfacing with just one endpoint but several in order to get a stateful experience. Regardless, this modular setup server-side makes it extremely powerful, because you can configure and manage multiple assistants that can, for instance, converse with you while looking up some info in the background, consult another assistant to get a better answer, and so on.

In V2.5 you will have nodes available for sending and receiving data from:
> - Assistants (For managing/configuring assistants)
> - Messages (For managing conversations)
> - Runs (For managing what the assistant processes)
> - Threads (For saving conversations)
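
To make that more concrete, here’s a rough sketch of the raw HTTP calls behind those four areas using Unreal’s FHttpModule. This is not the plugin’s actual node setup; the endpoint paths follow OpenAI’s Assistants API docs, and `ApiKey`, `THREAD_ID`, and `ASSISTANT_ID` are placeholders (in practice you’d parse the ids out of each response before making the next call).

```cpp
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"

// Helper: POST a json body to an Assistants API endpoint and log the reply.
static void PostToOpenAI(const FString& Endpoint, const FString& JsonBody, const FString& ApiKey)
{
	TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
	Request->SetURL(TEXT("https://api.openai.com/v1/") + Endpoint);
	Request->SetVerb(TEXT("POST"));
	Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
	Request->SetHeader(TEXT("Authorization"), TEXT("Bearer ") + ApiKey);
	Request->SetHeader(TEXT("OpenAI-Beta"), TEXT("assistants=v2"));
	Request->SetContentAsString(JsonBody);
	Request->OnProcessRequestComplete().BindLambda(
		[](FHttpRequestPtr, FHttpResponsePtr Response, bool bOk)
		{
			if (bOk && Response.IsValid())
			{
				UE_LOG(LogTemp, Log, TEXT("Assistants API response: %s"), *Response->GetContentAsString());
			}
		});
	Request->ProcessRequest();
}

// The stateful flow, roughly: create a thread, add a message, then start a run
// so the assistant processes the thread. Ids are placeholders here.
void ExampleAssistantFlow(const FString& ApiKey)
{
	PostToOpenAI(TEXT("threads"), TEXT("{}"), ApiKey);                    // Threads: holds the conversation
	PostToOpenAI(TEXT("threads/THREAD_ID/messages"),
		TEXT("{\"role\":\"user\",\"content\":\"Hello!\"}"), ApiKey);      // Messages: add the user prompt
	PostToOpenAI(TEXT("threads/THREAD_ID/runs"),
		TEXT("{\"assistant_id\":\"ASSISTANT_ID\"}"), ApiKey);             // Runs: ask an assistant to process it
}
```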

This API, and consequently the example for it, is going to be a bit more involved to configure properly, and learning the caveats of the expected message sequences is also a challenge (I’m still experimenting with it myself). It’s still pretty experimental, but I’ve set up a basic example for it in the content folder, which brings me to the next thing:

**Metahuman/Lip Sync support**
*Metahumans are now supported* along with an expandable, lightweight lip sync solution that finds visemes (face shapes) from timing data for surprisingly effective results.

*Notes on this:*
There are two main methods I’ve seen developers use to generate real-time face animations: processing the audio data directly, or reading generated timestamps. The method included in V2.5 *reads timestamps from ElevenLabs* (OpenAI doesn’t provide character-level timing yet) and picks the best phoneme based on that data. It’s pretty straightforward, and its simplicity lends itself to potential performance gains. We could use Unreal’s Neural Network Engine to do this for us, and it seems that in UE5.5 it might work in real time natively. Regardless, options are great, so I’ll be refining the timestamp method for performance and looking into an NNE solution for quality. I know this still isn’t an all-in-one solution yet, but hopefully it helps stop the endless stream of greedy alternatives.
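
If you’re curious what the timestamp approach boils down to, here’s a simplified sketch (not the plugin’s actual implementation; the viseme names and character lookup are made up for illustration): each character in the generated speech comes with a start time, and playback time just selects the matching viseme.

```cpp
#include "CoreMinimal.h"

// One entry of timing data: which character is being spoken and when it starts.
struct FCharTiming
{
	TCHAR Character;
	float StartTime; // seconds into the generated audio
};

// Hypothetical character -> viseme lookup; a real table would be far more nuanced.
static FName VisemeForCharacter(TCHAR Character)
{
	switch (FChar::ToLower(Character))
	{
	case 'a': case 'e': case 'i': return FName(TEXT("Viseme_AI"));
	case 'o': case 'u':           return FName(TEXT("Viseme_OU"));
	case 'm': case 'b': case 'p': return FName(TEXT("Viseme_MBP"));
	case 'f': case 'v':           return FName(TEXT("Viseme_FV"));
	default:                      return FName(TEXT("Viseme_Rest"));
	}
}

// Called each tick with the audio component's elapsed time: take the last
// character whose timestamp has already passed and return its viseme.
static FName CurrentViseme(const TArray<FCharTiming>& Timings, float ElapsedTime)
{
	FName Result(TEXT("Viseme_Rest"));
	for (const FCharTiming& Timing : Timings)
	{
		if (Timing.StartTime > ElapsedTime)
		{
			break;
		}
		Result = VisemeForCharacter(Timing.Character);
	}
	return Result;
}
```
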
Bonus: Did you know that you can *add metadata to animation assets*? I’ve exposed a node that allows you to actually GET this data to use anywhere you want. AIITK uses it to store the blendspace coordinates associated with each viseme… I’m probably the only one who thinks this is cool, but it’s a nice segue into the next topic lol.

*Part 2/3* @everyone

**Tools/Helpers**
In V2.5 you’ll start seeing more ways to manage data and requests within the editor/blueprints as we move closer to V3.0. This is the start of a shift to a fully modular framework that can adapt to API changes by letting you create your own structs that convert to json and are then sent via http or websocket.

Some highlights:

> **JsonHttp Toolkit** - A toolkit that helps with conversions like:
> Json->Struct
> Struct->Json
> Json->UObject
> UObject->Json
> Json->Http request
> Getting a field directly from a json string
> And more

`Why Json to/from UObject? Use an actor like a struct and just send all the actor’s data without having to manually manage dynamic variables`
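
If you’re wondering what Json<->Struct conversion looks like in C++, the engine’s own `FJsonObjectConverter` (JsonUtilities module) does this kind of heavy lifting; a minimal round-trip looks roughly like the sketch below. This is just an engine-level illustration, not the toolkit’s actual nodes, and `FChatRequest` is a made-up struct.

```cpp
#include "JsonObjectConverter.h" // JsonUtilities module

// Any USTRUCT with reflected UPROPERTYs can round-trip to and from json.
// (In a real header this also needs the .generated.h include.)
USTRUCT()
struct FChatRequest
{
	GENERATED_BODY()

	UPROPERTY() FString model;
	UPROPERTY() float temperature = 1.0f;
};

void StructJsonRoundTrip()
{
	// Struct -> json string
	FChatRequest Request;
	Request.model = TEXT("example-model");
	FString Json;
	FJsonObjectConverter::UStructToJsonObjectString(Request, Json);

	// Json string -> struct
	FChatRequest Parsed;
	FJsonObjectConverter::JsonObjectStringToUStruct(Json, &Parsed, 0, 0);
}
```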

> **WetSocket Manager** - Websockets are back! You can connect, close, and send a message via websockets, with events for handling data from the server. You can use this to connect to any websocket you want. ||Don’t ask why I named it WET SOCKet; I needed a unique name and my programming is like a wet sock, so… there you go.||
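
For reference, this is the same territory as the engine’s WebSockets module; a minimal raw connection (not the WetSocket Manager itself, with a placeholder URL) looks roughly like this:

```cpp
#include "WebSocketsModule.h"
#include "IWebSocket.h"

// Minimal raw websocket sketch using the engine's WebSockets module.
// In real code you'd store the socket as a member so it outlives this function.
void ConnectExample()
{
	TSharedPtr<IWebSocket> Socket =
		FWebSocketsModule::Get().CreateWebSocket(TEXT("wss://example.com/socket"));

	Socket->OnConnected().AddLambda([Socket]()
	{
		Socket->Send(TEXT("hello from Unreal")); // send once the handshake completes
	});

	Socket->OnMessage().AddLambda([](const FString& Message)
	{
		UE_LOG(LogTemp, Log, TEXT("Server said: %s"), *Message); // handle incoming data
	});

	Socket->Connect();
}
```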

> **AIITKUtility Widget** - A helper widget meant to be used in-editor to ***streamline json to blueprint-editable structs***. Additionally, it contains an ***asset analyzer*** to analyze and even edit variables typically hidden from the editor view, and to see what your asset’s data would look like if it were converted to json. Personally, this has been an INCREDIBLE tool for quickly setting up parameters for communication with any API, just by copying and pasting json examples from documentation into the tool. It will even create sub-structs whenever it encounters a json object; this is essentially what I was doing manually in C++, but *now you can configure it however you want in blueprints* without fully knowing json format and how it relates to struct parameters.

> **CallFunctionByName Node** - I really don’t know why this isn’t included by default in-engine. The timer node lets you execute a function by its name after a delay; this node does exactly that, but without the delay, allowing you to avoid gigantic switch statements.
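
For context, the C++ side of the engine does expose something like this; `UObject::CallFunctionByNameWithArguments` is roughly what such a node would wrap. The function name below is just a placeholder.

```cpp
#include "Misc/OutputDeviceNull.h"

// Rough C++ equivalent of a "call function by name" node.
void CallByName(UObject* Target, const FString& FunctionName)
{
	if (Target)
	{
		FOutputDeviceNull Ar;
		Target->CallFunctionByNameWithArguments(*FunctionName, Ar, nullptr, true);
	}
}

// Usage: instead of a giant switch on a string/enum, route the string straight through, e.g.
// CallByName(SomeActor, TEXT("OnDialogueFinished")); // hypothetical function name
```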

> **Audio Processing** - I’ve made some updates to the audio conversion and processing features included with the plugin, with an emphasis on raw audio byte manipulation. This was helpful for serializing a stream of the user’s voice to the API. You can now:
> - Play an audio component that broadcasts the elapsed time (for procedural sound waves)
> - Encode raw audio bytes to Base64
> - Decode Base64 audio data to raw bytes
> - Load a Wav file as raw bytes
> - Resample raw audio bytes
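
As a rough illustration of the Base64 pieces, the engine’s `FBase64` utility is what you’d reach for in C++ (the plugin presumably exposes something similar as blueprint nodes); encoding is handy when an API expects audio embedded in a json payload as base64 text.

```cpp
#include "Misc/Base64.h"

// Round-trip raw audio bytes through Base64, e.g. before embedding them
// in a json payload for an API that expects base64-encoded audio.
void AudioBase64RoundTrip(const TArray<uint8>& RawAudioBytes)
{
	// Raw bytes -> Base64 string
	const FString Encoded = FBase64::Encode(RawAudioBytes);

	// Base64 string -> raw bytes
	TArray<uint8> Decoded;
	FBase64::Decode(Encoded, Decoded);
}
```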

*"Why do I need to serialize a stream of the user’s voice to the API...?"*



**OpenAI Realtime API**
Surprise! Good timing for the websocket revamp: V2.5 will feature an example to connect to the *Realtime API* as well as the *11Labs websocket API* via the new WetSocket Manager. I’ve provided a RealtimeAPIComponent that handles most of the events for you and is expandable, so you can add your own logic to the 28 server events you can listen to. You can literally stick this on any actor and start talking to it. It scares me, I really can’t wait for you to try it.
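
If you want a feel for what the component is doing under the hood, connecting to the Realtime API is just a websocket with an auth header plus json events. A rough sketch follows; the model name and event json here follow OpenAI’s docs at the time of writing, so double-check both against the current documentation.

```cpp
#include "WebSocketsModule.h"
#include "IWebSocket.h"

// Rough sketch of a raw Realtime API connection (not the RealtimeAPIComponent).
void ConnectRealtime(const FString& ApiKey)
{
	TMap<FString, FString> Headers;
	Headers.Add(TEXT("Authorization"), TEXT("Bearer ") + ApiKey);
	Headers.Add(TEXT("OpenAI-Beta"), TEXT("realtime=v1"));

	TSharedPtr<IWebSocket> Socket = FWebSocketsModule::Get().CreateWebSocket(
		TEXT("wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"),
		TEXT(""), Headers);

	Socket->OnConnected().AddLambda([Socket]()
	{
		// Client event asking the model to generate a response; audio/text deltas
		// then arrive as server events on OnMessage.
		Socket->Send(TEXT("{\"type\":\"response.create\"}"));
	});

	Socket->OnMessage().AddLambda([](const FString& Event)
	{
		UE_LOG(LogTemp, Log, TEXT("Realtime event: %s"), *Event);
	});

	Socket->Connect();
}
```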

**Android Support**
I planned on this version of AIITK being fully supported on Android. I’ve added a lot of features in a short time frame since I last tested for Android, and I *unfortunately don’t have enough time to check if all of the new features work seamlessly on mobile*, so I don’t want to promise something that isn’t true; re-checking compatibility would push this update back another week at best. That will be the primary focus of the next update, though, along with QoL improvements and bug fixes. Feel free to compile for mobile yourself and report what works and what doesn’t.

**You sure do talk a lot, when can I try this stuff out?**
*Monday*, unless FAB still isn't open to the public by then. I'm still doing some last-minute fixes and will soon be reaching out to individuals who have shown interest in helping me test.

*I think that's about it! Additionally, I feel I need to say that I’m forever grateful to everyone who has purchased the plugin so far; your enthusiasm and positivity have been an incredible source of drive for me to cram as much stuff in here as possible, and your reviews and comments make me want to cry.*

**Genuinely, thank you.** :heart:

*-Kaleb*