I'm sorry, but this job description is a complete mess. It's like the person who wrote it didn't even try to put in any effort. There's no context or detail about what the Marketing Science Director will actually be doing, and the qualifications listed are overly general and don't give any sense of the level of expertise or experience that the ideal candidate should have. I'm not even sure what this company is looking for in a Marketing Science Director, and I can't imagine any qualified candidates taking this posting seriously. Frankly, I'm shocked that a company would think this is an acceptable job description. They need to do a lot better if they want to attract top talent.

___________________________________________

The above was written by ChatGPT, criticizing a JD also written by ChatGPT. What a world we are about to encounter! Please, please can we avoid a future where everything becomes a generic drawl of processed and reprocessed tripe? All of this to say, we are hiring for a Marketing Science Director; let me know if you're interested and I'll share the actual JD! #hiring #marketing #marketingscience #chatgpt
James Addlestone’s Post
-
One of the most frustrating myths in the industry... 'Using data to measure and optimise will only ever replicate past success, and so it curtails creativity and innovation.'

If you try to optimise an outcome, then yes, this is often true. Imagine you're trying to create the perfect margarita. You could scrape reviews for all margaritas; look at the methods and ingredients; and identify the combinations of inputs (recipes) that led to the best outcomes (reviews). With this method, you are unlikely to create a better margarita than the best one that exists so far.

But instead, imagine you did some research to understand what the best 'features' of a margarita are. Let's say 'balance of sweet and sour' is identified as being important. You could then scrape any reviews from worldwide cuisine that mention the 'perfect balance between sweet and sour' (cocktail or non-cocktail), which could then inspire an entirely new recipe surpassing any margarita made so far.

Because 'creativity' can be one of three things:
1. New combinations
2. Pushing existing boundaries to the limit
3. Removing existing assumptions to create new boundaries altogether

By optimising further down the decision tree and using data from different sources, you can create new combinations, as in the simple example above. By focusing on outliers and optimising to extremes, you can achieve (2). And by completely removing constraints from models, you can achieve (3). More examples of these later.

If we only ever let the machines optimise to top-line metrics like 'cost per acquisition'; if we only feed algorithms with narrow data sets based on our own past behaviour; if we accept mediocrity and use data purely to prevent failure rather than to inspire the new... then yes, data (+ AI) will make the world far more boring and predictable. But let's not...
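The margarita example can be sketched as a toy search problem. This is a minimal Python sketch with invented data (the recipes, scores, and the 'balance' feature are all assumptions for illustration): optimising the outcome directly can only recover the best existing recipe, while optimising a learned feature can propose combinations no recipe yet occupies.

```python
# Toy illustration with invented data: outcome optimisation vs feature-led search.
# Each recipe is (sweetness, sourness) plus an average review score.
recipes = {
    "classic": ((0.6, 0.8), 7.2),
    "spicy":   ((0.5, 0.9), 6.8),
    "frozen":  ((0.9, 0.4), 6.1),
}

# Approach 1: optimise the outcome (review score) directly.
# The ceiling is the best recipe that already exists.
best_existing = max(recipes, key=lambda name: recipes[name][1])

# Approach 2: identify a feature that drives good reviews
# (assumed here: balance of sweet and sour), then search the whole
# feature space, including points no existing recipe occupies.
def balance(sweet, sour):
    return 1 - abs(sweet - sour)  # 1.0 = perfectly balanced

candidates = [(s / 10, t / 10) for s in range(11) for t in range(11)]
# Tie-break equally balanced candidates by overall intensity.
best_new = max(candidates, key=lambda c: (balance(*c), sum(c)))

print(best_existing)  # 'classic' - approach 1 can never beat it
print(best_new)       # (1.0, 1.0) - a combination no existing recipe has
```

The point of the sketch is the search space, not the scoring: approach 1 can only pick among the three keys of `recipes`, while approach 2 ranges over the full grid of candidates.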
-
You don't want a 'customer segmentation'. You want to understand whether and how you should vary messaging, products and pricing for different prospects and existing customers.

You don't want a 'measurement framework'. You want a way of identifying what success means to you, and understanding what is and isn't driving success.

You don't want 'AI'. You want to understand whether you can drive better decision making or efficiencies through automation, given today's technological capability (which may or may not include AI).

You don't want an 'Omnichannel strategy'. You want a way of talking to customers consistently, no matter where they are or what device they are using.

I've seen so many briefs that float carelessly through businesses and agencies, picking up debris and ultimately landing on someone's desk with little impetus and no direction, yet taking up time for months on end. Ultimately solving for nothing.

And if we want to be more productive and respected as agencies, we need to get better at challenging whether these briefs are truly adding value, rather than just whether they have been allocated budget.
-
Very sad to hear that Daniel Kahneman has passed away. Thinking, Fast and Slow, prospect theory, loss aversion, the endowment effect, etc. are all great contributions to academia, but more than that, he was focused on discovering the truth - on actually understanding how the world works by dismissing the assumption of perfectly rational decision making - which helped shift economics from abstract, assumption-based modelling to (in my view) an interesting and useful discipline. He simply didn't accept the dogmatic approach to economics that had prevailed before his time. So many fields, including marketing, would benefit from the same approach today.
-
How much land do you need to graze a single cow? The answer is 1 'collop', an old Irish word.

Is this more useful than an 'acre'? If you're a farmer comparing the value of two fields, of course it is. Is it more precise? Can it be truly standardized? No. But it doesn't matter. Because the measure of a good measure is its utility.

I think we've completely lost sight of this in marketing. In a desperate attempt to compare the performance of campaigns or channels or propositions, we've created abstract 'measurements' masquerading as practical KPIs.

Take 'cost per acquisition', for instance. It hides more than it illuminates. Did you acquire valuable customers? Would you have acquired them anyway? Did people who saw your marketing but did not purchase leave with a negative feeling towards your brand? Did those individuals return their product just days later?!

In a world where we are training AI to optimise against the KPIs we choose, I think it's important we get this right. Even if that means using metrics that feel less precise and standardised, but actually tell us more.
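As a concrete illustration of how CPA can mislead, here is a minimal Python sketch with invented numbers: two campaigns with identical cost per acquisition look very different once returns and incrementality (whether the customer would have bought anyway) are counted.

```python
# Hypothetical figures: identical CPA, very different real value.
campaigns = {
    "A": {"spend": 1000, "acquired": 100, "returned": 40, "incremental_rate": 0.5},
    "B": {"spend": 1000, "acquired": 100, "returned": 5,  "incremental_rate": 0.9},
}

results = {}
for name, c in campaigns.items():
    cpa = c["spend"] / c["acquired"]          # the headline KPI: identical for both
    kept = c["acquired"] - c["returned"]      # customers who didn't return the product
    truly_new = kept * c["incremental_rate"]  # customers who wouldn't have bought anyway
    results[name] = (cpa, round(c["spend"] / truly_new, 1))

print(results)  # {'A': (10.0, 33.3), 'B': (10.0, 11.7)}
```

On headline CPA the two campaigns are indistinguishable at £10 each; the cost per incremental, retained customer is nearly three times higher for campaign A.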
-
I’m happy to share that I’m starting a new position as Head of Data Arts at Saatchi & Saatchi
-
I thought I could send a rocket into space with just a few million £ of investment. But I thought I'd better test the concept first, as cheaply as possible. So I made a toy rocket and threw it as high as I could.

It didn't work. It barely made it two feet into the air before hitting the ground and breaking into pieces.

You might think this is a story of failure, but it's not. Because I actually saved millions of pounds on research and development, which I can now reinvest in my toy rockets business.

The moral of the story is you can always create a valid proof of concept at low cost. Be it a rocket, an advert, or a poem.
-
1. What kind of art and music would be produced by a society that lives in complete darkness?
2. How would the concept of time differ for a species that lives on a planet with no day-night cycle?
3. If dreams were a shared experience and everyone in the world could see them, how would society react and evolve?
4. If trees could communicate with each other in a language, what would their daily conversations revolve around?
5. How would the world change if shadows had colors?
6. What would a culture developed by sentient underwater creatures, with no knowledge of the surface world, value and prioritize?
7. If memories could be physically traded or sold, what would the economy and societal values look like?

I asked ChatGPT to tell me some interesting questions it is surprised it's never been asked before. Whether or not these questions are genuinely unique (I'm sure they aren't), you can quickly imagine how you might come to feel you have a unique relationship with "AI".

And that's the bet that PE houses backing "AI relationships" (e.g. myanima.ai) are placing: that we will be able to develop relationships with AI that have the perception of uniqueness, authenticity and depth.

So can we have relationships with "AI"? Can we have genuine relationships with other non-sentient entities? With brands? With places we love? With our homes? What is a relationship anyway? What are the secondary impacts of diluting our affection across both real humans and AI?

Before plunging straight into "AI", I think we all need to take a step back and think through some of the deeper questions and implications. If there's one lesson we should learn from how the advent of social media changed our politics, media, relationships and culture, it's that we should (all) be asking these questions right now...
-
Personalisation is overrated. I've heard lots of negativity around personalisation recently: ❌ Potential data privacy challenges ❌ Hard to show ROI ❌ Has the potential to contain bias ❌ Hard to get sign-off on every message variant ❌ Hard to maintain authenticity; the more you vary the message for each individual, the more "populist" you appear. ❌ Requires accurate, consistent, up-to-date data But time and time again, I see case studies showing substantiated upside. Really enjoyed debating this and a lot more as part of the CMO EU summit this week, where once again I was reminded of the importance of balance, pragmatism, but perhaps most of all, that very rarely does a single solution work for every brand.
-
It's better to be right 60% of the time, but know you're only right 60% of the time... than to be right 70% of the time, but think you're right 100% of the time.

The danger of applying econometrics to challenges where limited data exists is that you often pay a lot to get the latter. You get a mirage of precision. Followed by confirmation bias, effort justification, and years of misguided decision making.

Far better to have a sensible, explainable, easy-to-monitor, balanced approach to measurement than the paradox of a more accurate but more confusing model.
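One way to make the 60%-vs-70% point concrete is a proper scoring rule such as the Brier score, which rewards honest uncertainty. A quick sketch (the percentages come from the post; treating each forecast as a single repeated probability is my simplifying assumption):

```python
# Brier score: squared error between the stated probability and the outcome.
# Lower is better; it penalises overconfidence as well as inaccuracy.
def brier(prob, outcome):
    return (prob - outcome) ** 2

# Calibrated forecaster: says 60% and is right 60% of the time.
calibrated = 0.6 * brier(0.6, 1) + 0.4 * brier(0.6, 0)

# Overconfident forecaster: says 100% but is right only 70% of the time.
overconfident = 0.7 * brier(1.0, 1) + 0.3 * brier(1.0, 0)

print(round(calibrated, 2), round(overconfident, 2))  # 0.24 0.3
assert calibrated < overconfident  # the honest 60% forecaster scores better
```

Under this scoring, the less accurate but well-calibrated forecaster wins, which is exactly the trade the post argues for.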
-
Could AI be a better judge than a human? Humans are so flawed that, as Kahneman shows in "Noise", judges give more favourable rulings after something as trivial as a lunch break. Bias is rife.

If you fed a model the details of every law, alongside every past case and its associated ruling, it would surely make a more consistent and efficient judge in court rulings. Any bias that did exist could at least be easily calibrated out, as the variability would be so low.

But would you be happy if AI was your judge? Probably not. I contributed to an Adweek article ( 👇 ) where I try to explain why.