Our Creative Evaluation Engine, Project Tars, is named after the AI robot from Interstellar that played a key role in discovering the new dimension. A smart system that analyzes the elements of a design, Tars evaluates creatives with the intention of boosting performance. In a simple but revolutionary way, it changes how we think about design.
The birth of our Creative Evaluation Engine - TARS
Over 200K creatives were generated on Rocketium in the last year. All of these came from some of the highest-performing enterprises in our ecosystem, and we have been a quiet force behind their large-scale projects.
Sensitive campaign information is highly protected on the Rocketium platform. At the same time, there is a trove of data from successful, large-scale businesses that make creatives for different objectives (such as ads, app, and push). If we are truly committed to our mission, we could not let this goldmine of data go unused for smarter, better projects.
Thus, TARS came into being. Just like its namesake, Rocketium's TARS is thoroughly tested and trained logic that uses the collective intelligence of the entire platform, looking at each element objectively, to help build high-performing creatives for our users. The engine enriches and unlocks existing data on the platform to make design recommendations for upcoming projects. And with every new creative made on the platform, the data is enriched further, helping every new and old user make better decisions the next time.
The vision: smart recommendations for better-performing designs.
The whole project relied on two modules coming together in perfect harmony:
Tesseract: Building an analytics engine that can store and perform queries to retrieve ‘interesting’ information and insights from a large number of creatives, and then compare this information.
Tars: Building a recommendation engine that can compare new information (from creatives and user actions) with guidelines set by the company.
The final outcome would be data compiled by the analytics engine, provided as input to the recommendation engine to improve the performance of every design created.
Part 1 - Building Tesseract: The Analytics Engine
At Rocketium, we store a creative's input data in a JSON format, which is then converted to a richer output. Our technical terminology for this codification of creatives is a JSON -> Video/Image/GIF converter. However, building this analytics engine came with a few challenges:
Problem #1
We use MongoDB on our backend, and the deeply nested creative data is quite difficult to run analytics on. The existing data is heavy, and complex aggregation queries would be very slow to return a response.
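To make the nesting concrete, here is a hypothetical, heavily trimmed shape of a stored creative document, reconstructed from the fields used later in this post (real documents carry far more styling data):

{
  "capsuleId": "c_123",
  "version": 4,
  "outputFormat": "image",
  "scenes": [
    {
      "elements": [
        {
          "elementType": "image",
          "helpText": "brand logo",
          "styles": {
            "size_height": 12,
            "size_width": 20,
            "modifyLength_top": 0,
            "modifyLength_bottom": 0,
            "modifyLength_left": 5,
            "modifyLength_right": 5
          }
        }
      ]
    }
  ]
}

Aggregating over arrays nested three or four levels deep is exactly what makes these queries slow and painful.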
So we had to figure out the best database, or some other analytics engine, that could query this data with better performance.
We will circle back to this issue, but there's more to look into first.
Problem #2
We wanted to slice and dice the data based on many keys, which meant we couldn’t use standard MySQL or MongoDB setups.
So we started the research, and heavy as it was, we settled on Google's BigQuery for the following reasons:
It allows SQL-style querying but has no concept of indexes. This works in our favor, resolving the second problem mentioned above.
Performance was better once the data got into the data flow.
It is quite cost-efficient compared to other ETL tools such as Stitch, which we loved initially but decided against for maintainability and control reasons.
Problem #3
If we were going to use two different databases to run this operation, we’d have to figure out a way to keep them synced.
To resolve this, we looked into MongoDB's built-in support and tackled the issue in two parts.
First, we needed to address the past data, so we took a full dump. Our data was about 30 GB, and the transform script had to run at each level of nesting; ours ran on only a portion of that, but we figured this was good enough for a POC, and we set out.
Second, to keep things in sync during runtime, we leveraged the watch command in Mongo to receive any changes to our data:
// Tail the capsules collection so every change flows downstream in real time.
// `db` is a connected database handle from the MongoDB Node.js driver.
const tail = (db) => {
  const taskCollection = db.collection('capsules');
  // fullDocument: 'updateLookup' delivers the whole updated document, not just the delta;
  // reading from a secondary keeps load off the primary
  const changeStream = taskCollection.watch([], {
    fullDocument: 'updateLookup',
    readPreference: 'secondary',
  });
  changeStream.on('change', (change) => {
    console.log(change.fullDocument);
  });
};
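And here is a minimal sketch of where those change events could go next: flattening the nested document and streaming the rows into BigQuery. The flattenCapsule helper is hypothetical and heavily trimmed; the dataset and table names are the ones used in the queries below, and the client is the official @google-cloud/bigquery package.

const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

// Hypothetical helper: split one nested capsule document into flat rows,
// one array per destination table (capsules, scenes, elements, styles)
const flattenCapsule = (doc) => ({
  capsules: [{ capsuleId: doc.capsuleId, version: doc.version, outputFormat: doc.outputFormat }],
  styles: (doc.scenes || []).flatMap((scene) =>
    (scene.elements || []).map((el) => ({
      capsuleId: doc.capsuleId,
      version: doc.version,
      elementType: el.elementType,
      ...el.styles,
    }))
  ),
});

// Called from the change handler above with change.fullDocument
const syncToBigQuery = async (doc) => {
  const rows = flattenCapsule(doc);
  await bigquery.dataset('analytics').table('capsules_dev').insert(rows.capsules);
  await bigquery.dataset('analytics').table('styles_dev').insert(rows.styles);
};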
So finally, our data arrived in BigQuery in a flattened format: our single table from Mongo was now four different tables in BigQuery. The next step was to form queries and surface the results in our UI as actual user insights, such as:
You have exceeded the recommended ratio of the logo to the container. Consider resizing.
The image to container ratio is exceeding the guideline for optimal CTRs in your category. Consider rearranging.
Too many font styles are used. Legibility for your aspect ratio is not optimal.
As a newbie to the analytics team, I needed some time to recall what should happen next. The answer came from something we all probably learned in the 11th grade: histogram quantiles. We could compute quantiles over the historical data and then make deductions based on the present value of a user's inputs.
A sample query for the first statement above, regarding the logo:
select
  -- split the distribution of logo areas into 100 percentile buckets
  approx_quantiles(total_area, 100) as percentile
from
  (
    select
      -- total logo area per creative, discarding outliers that cover >= 100%
      sum(if(stylesWithArea.area >= 100, 0, stylesWithArea.area)) as total_area
    from
      (
        select
          *,
          -- element area as a percentage of the canvas, after trimming
          -- the cropped-off margins (modifyLength_*) on each side
          (
            (styles.size_height / 100) * (styles.size_width / 100)
            * (100 - styles.modifyLength_top - styles.modifyLength_bottom)
            * (100 - styles.modifyLength_left - styles.modifyLength_right)
            / (styles.size_width * styles.size_height)
          ) * 100 as area
        from
          `new-vidgen-servers.analytics.styles_dev` as styles
        where
          styles.elementType in ('image')
          and lower(styles.helpText) like '%logo%'
      ) as stylesWithArea
      left outer join `new-vidgen-servers.analytics.capsules_dev` as capsules
        on stylesWithArea.capsuleId = capsules.capsuleId
        and stylesWithArea.version = capsules.version
    where
      capsules.outputFormat = 'image'
    group by
      stylesWithArea.capsuleId,
      stylesWithArea.version,
      stylesWithArea.size
  )
Now we had the data as percentiles from 0 to 100, with the value at each step. We simply compared the input value from a new creative against this historical data to say something like:
83% of creatives have a logo larger than the 2% of area it occupies in your creative.
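Under the hood, the deduction is just a lookup: find the highest percentile whose value is still below the new creative's value. A minimal sketch, assuming the array returned by approx_quantiles above:

// quantiles: ascending values from approx_quantiles(total_area, 100);
// entry i holds the logo-area value at the i-th percentile
const percentileOf = (quantiles, value) => {
  let i = 0;
  while (i < quantiles.length - 1 && quantiles[i + 1] <= value) i += 1;
  return Math.round((i * 100) / (quantiles.length - 1));
};

// Hypothetical usage: a new creative whose logo covers 2.1% of the canvas
const quantiles = [0, 0.5, 1.1, 1.8, 2.6]; // shortened to 5 entries for readability
console.log(`Your logo is larger than in ${percentileOf(quantiles, 2.1)}% of creatives`);
// -> "Your logo is larger than in 75% of creatives"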
The entire analytics ran at two levels: across all Rocketium data, and within each customer's scope. Tactically, we stored the entire query in our database, with variables such as {{teamId}} in the query for injecting dynamic values.
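A minimal sketch of how such a stored query might be hydrated at runtime (the {{teamId}} placeholder syntax is from our setup; the helper and the sample query are hypothetical illustrations):

// Replace {{variable}} placeholders in a stored query with runtime values
const hydrateQuery = (template, vars) =>
  template.replace(/{{(\w+)}}/g, (_, key) => vars[key]);

const stored = 'select approx_quantiles(total_area, 100) from areas where teamId = "{{teamId}}"';
console.log(hydrateQuery(stored, { teamId: 'team_42' }));
// -> select approx_quantiles(total_area, 100) from areas where teamId = "team_42"

For untrusted values, BigQuery's named query parameters would be a safer choice than raw string substitution.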
Part 2 - Building Tars: The Creative Evaluation Engine
I thought of building Tars the way ESLint rules work in our React app: a user provides a config JSON, and while developing, we get errors or warnings if there are any issues. Of course, a designer cannot know the full depth of the brand guidelines, especially in a large team with many stakeholders. So our intention here was to resolve the issue of reviewing repetitive things altogether.
This meant providing suggestions like:
The text should be between X and Y % of the banner.
The total number of characters in the banner should be between X and Y.
Steps to building Tars
Having Tesseract in place was a huge advantage, as we decided to use the same logic to get the flattened structure for Tars as well. We chose json-rules-engine (https://github.com/CacheControl/json-rules-engine) as the base and built on top of it.
This modular library offered us a great deal of liberty in what we were trying to achieve. A single rule gives a yes or no, but we wanted a loop of these rules constantly in motion. Furthermore, we needed to collect the values at which elements break the rules.
We also needed aggregation in certain places, and I thought of the whole thing as if it were an SQL query: I had to build logic for grouping, filtering, and collecting data. Here is an example:
{
"_id" : "totalTextArea",
"name" : "Total text area",
"message" : "Text should be between 30 and 50 % of the banner",
"ownerId" : "561796c9ce0203f33fe8e565",
"order" : 1.0,
"rules" : [
{
"type" : "filter",
"key" : "styles",
"outputKey" : "textElements",
"customFact" : "area",
"ruleJson" : {
"all" : [
{
"any" : [
{
"fact" : "elementType",
"operator" : "equal",
"value" : "text"
},
{
"fact" : "elementType",
"operator" : "equal",
"value" : ""
}
]
},
{
"fact" : "area",
"operator" : "greaterThan",
"value" : 0.0
}
]
}
},
{
"type" : "aggregate",
"ruleJson" : {
"any" : [
{
"fact" : "sum_value",
"operator" : "greaterThan",
"value" : 50.0,
"params" : {
"array" : "textElements",
"key" : "area",
"groupBy" : "size",
"groupByVal" : "max"
}
},
{
"fact" : "sum_value",
"operator" : "lessThan",
"value" : 30.0,
"params" : {
"array" : "textElements",
"key" : "area",
"groupBy" : "size",
"groupByVal" : "min"
}
}
]
}
}
],
"isDeleted" : false
}
How does this work?
If you go through the JSON above closely, it reveals the following:
The flattened structure includes capsule, scenes, elements, and styles as the four base structures.
The library compares data using greater-than or less-than operators.
Custom facts are used to calculate areas, or min/max values.
Custom facts also come with a bunch of parameters.
We have two rule types, one for filtering the data and one for aggregating it.
And finally, the rule engine was ready to use; all set to help customers define their rules.
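To make that concrete, here is a minimal sketch of how a custom fact and a rule come together in json-rules-engine. The rule is a trimmed-down cousin of the totalTextArea example above; the area calculation and the banner_width/banner_height fields are simplified placeholders:

const { Engine } = require('json-rules-engine');

const engine = new Engine();

// Custom fact: element area as a percentage of the banner
// (a simplified stand-in for the real calculation)
engine.addFact('area', async (params, almanac) => {
  const styles = await almanac.factValue('styles');
  return (styles.size_width * styles.size_height * 100) /
    (styles.banner_width * styles.banner_height);
});

// A trimmed-down version of the totalTextArea rule above
engine.addRule({
  conditions: {
    all: [
      { fact: 'elementType', operator: 'equal', value: 'text' },
      { fact: 'area', operator: 'greaterThan', value: 50 },
    ],
  },
  event: {
    type: 'guideline-violation',
    params: { message: 'Text should be between 30 and 50 % of the banner' },
  },
});

// Evaluate one flattened element; an event fires for every rule it breaks
const element = {
  elementType: 'text',
  styles: { size_width: 600, size_height: 300, banner_width: 800, banner_height: 400 },
};
engine.run(element).then(({ events }) => {
  events.forEach((e) => console.log(e.params.message));
});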
To be continued
Everything incredible in the world has a version 2 that truly defines its capabilities, and Tars is going to follow the same route. We are yet to achieve all that we know we can offer, and we know what we want to achieve:
Users making their own rules (for now we only offer them what we have put together in our database)
Adding more complex rules to enrich users’ experience with more capabilities.
Constantly checking our analytics engine and building more queries.
So we're setting out toward a bigger horizon. Until we get there and I can share it with you, I hope this suffices.
What are you waiting for? TRY IT OUT!
Go to https://rocketium.com and sign up to see the mammoth Tars in action!