Cover image: Effie Webb, reporter at the Bureau of Investigative Journalism
Danny Bones has the profile of a successful British rapper: millions of views on TikTok, hundreds of thousands of streams on Spotify, a loyal fanbase drawn to his music about immigration, national identity and cultural decline.
The only problem is that he does not exist.
Danny Bones is an AI-generated character, built and managed by an anonymous group called the Node Project.
An investigation by the Bureau of Investigative Journalism uncovered that the same group had been commissioned by Advance UK, a far-right party founded by former Reform UK figure Ben Habib, to produce political campaign content. It is one of the first documented cases of a British party paying an AI operation to shape its public messaging.
The investigation was carried out by Effie Webb, reporter at the Bureau of Investigative Journalism, whom we had the pleasure of interviewing.
I would like to start with the Danny Bones investigation and then move on to some more general questions about online information. How did you first come across Danny Bones? And how did you realise that this wasn’t a real person but an AI-generated character?
Spotify actually recommended it to me through a Discover Weekly playlist. Initially I thought it was really good and didn’t think much of it, but then I pressed on the profile. My AI radar is pretty good, so I immediately thought: OK, that’s an AI-generated avatar. I was intrigued and started exploring out of curiosity. I discovered there wasn’t just one song but four. It wasn’t a verified artist, Spotify doesn’t do that anymore, but it was a registered artist, so I thought it was a legitimate thing. I then went over to social media, where they were even more successful, with millions of views on TikTok and a presence across all the platforms. I was kind of hooked, because I’ve seen plenty of AI avatars, influencers, memes and slop, but I had never seen a really convincing, successful musician and character before. That’s how I found Danny Bones, and the rest followed from there.
This was a pretty complex operation, and behind it there was the Node Project. Can you tell us a bit more about the Node Project? How large is this group, how sophisticated is it, and what is the scale of the operation?
When I went on the Node Project website for the first time, I was struck by how professional and legitimate it looked, what you might expect from an influencer agency website. It described itself as a collective, and I assumed there was more than one person behind it. But it was very opaque in terms of team members and organisation. There was no indication of their location or origin. Obviously the character Danny Bones is presented as a British patriot, or whatever you want to call it, so I assumed there was a UK link, but I was trying to track down who on earth was behind this. It’s impressive both visually and musically. Visually, it is relatively hyper-realistic, and a lot of people I’ve spoken to didn’t initially see that it was AI. Musically, the songs were not just good quality, they came from a consistent character. Keeping an AI avatar consistent from video to video, with the same voice, is one of the most difficult things in AI. So I thought: this isn’t just anyone, this must be someone with real technical skills.
But the Node Project isn’t only behind the Danny Bones character, they also created content for Advance UK. What is the significance of this link? Is the Node Project an independent group? Is this a political operation that also makes music? Or, on the contrary, is this a group that makes content online and also does politics?
I think the latter. Initially I thought the Node Project was just behind the Danny Bones character. In their communications, they hint that Danny Bones is the first of more avatars down the line. But during the investigation I was able to link it with Advance UK, a Christian nationalist party that was posting a lot of AI-generated political content on their X account. Because I’d already seen some of the Danny Bones and Node Project content, I could see similarities in style, the kind of cuts, the AI imagery, even the voiceover seemed very similar, and obviously the British patriotic theme. Then I noticed that at the end of the two-minute Advance UK campaign video that went viral, there was the Node Project logo. At that point, I started to investigate whether the Node Project was just an entity funded by Advance UK, or a separate thing. To be honest, it’s somewhere in between. It is a separate, independent project. They make content. The Danny Bones figure has nothing to do with the party, they made that clear when I put it to them. The Advance UK content was commissioned on a freelance basis, as both Advance UK and the Node Project told me. That’s when I noticed what was really significant: we’ve been discussing the threat that AI might pose to democracy and elections, and 2024, with so many elections taking place, was widely considered the year of deepfake videos. We had always spoken about this very theoretically, but Danny Bones was the first avatar that was really convincing and directly linked to a political party. Other parties may have used AI before, but this is someone being paid to create content without being accountable, because they are just an AI character.
On this last point, do you think there’s a difference in paying a group that creates content through AI? Or is it the same as having an influencer or a content creator run a political campaign for you?
I’ve been thinking about how different it is and whether it’s worse. I don’t necessarily think there’s much material difference. And I think increasingly, in the next year or so, we’ll see AI influencers becoming the new norm, in politics as much as in commercial industries. What worries me more is that it’s a faceless operation. There is no one to hold to account and no one to report to. In my investigation, I didn’t report them to TikTok and the other platforms, but I did flag that I was going to include this content in the piece, and TikTok did ban their account. But the accountability kind of ends there. I then went to the Electoral Commission, which regulates elections and campaigning, but even if it had turned out they’d been in breach of some election campaigning rule, it would have been very difficult to pin the blame on anyone. So I think in a way, the party is putting itself at more risk by using an AI influencer, because if it’s a real person and something goes wrong, you can go to them. I think it’s probably too early to tell, but legally it’s a bit of a minefield.
Beyond accountability, what do you think is the difference between AI-generated campaigns and content spread by bot farms and fake accounts?
I think the difference is that with AI-generated content the operation is more centralised and recognisable. With the Danny Bones character there were essentially two things going on. One was Advance UK paying for Node Project content that wasn’t necessarily breaching the platform rules, it was patriotic messaging, AI-generated yes, but just spreading a political message. At the same time, the Node Project was publishing and hosting the Danny Bones content, which was less about misinformation and more about hate speech, Islamophobia, and that kind of thing. I think a decentralised, fragmented network of bots is arguably less likely to carry the legitimacy of a political party behind it. Because it was one figure, the Danny Bones character could more easily endorse the party and function as a genuine influencer. Anonymous bots are simply not influential in the same way.
Moving from the politics of this operation, what is the economic engine behind it? What does this tell us about the monetisation of AI-generated content on social media?
I don’t think there’s a problem with AI being monetised per se. The problem this investigation highlights is that they are monetising content that is rage bait, hate speech, and material that is slightly conspiratorial or has conspiracy-adjacent language. This is not the first time it’s happened. Platforms do incentivise engagement in that way, prioritising content that provokes rage or anger, and the Node Project content is very successful at doing exactly that. In terms of monetisation, Danny Bones had three income streams. The first was the platforms: they were monetising on Spotify and X, though I’m not entirely sure about the latter. YouTube is a no, because they’ve stopped monetising AI-generated creators and avatars. The second stream came from memberships and merchandise, which I wouldn’t generally object to, but in this case it’s all related to racist content. The third is party-political revenue. It must be said that the operation was very robust and had a sophisticated setup. I’d imagine they’re spending a fair amount on AI tools, but then again, these tools are pretty cheap and they might actually be making a decent profit. They did tell me by email that they weren’t making a meaningful amount of money, I can’t confirm that, but that’s what they said.
And your second investigation uncovered another economic operation involving meme coins and crypto trading.
The crypto side of it is really interesting. I’m seeing it also with the Iran war going on. There are all these meme coins and political meme coins that pop up, especially in the wake of events such as the Iran war, or linked to characters similar to Danny Bones. X and the crypto space are full of these sorts of political and controversial events that quickly get turned into a very volatile coin, one that peaks and troughs extremely quickly. The Node Project, to their credit, weren’t involved in that per se. But what was really interesting was that the people pushing the meme coins around Danny Bones had nothing to do with UK politics, from what I could tell. So it’s very much a hate economy, as opposed to any kind of political operation.
Apart from the political accountability side, there is also a problem of content moderation and verification. During the Danny Bones episode, Grok, X’s AI assistant, told users that Danny Bones was a real person, not AI. What does this tell us about the reliability of AI-based moderation tools? And how does it highlight the importance of human oversight over online content and community notes?
On X, the shift from proper human trust and safety teams to Community Notes isn’t a bad model in principle, but it isn’t great in practice, because Community Notes is very broad in scope. It doesn’t deal with the hate speech element at all. It purports to address misinformation, but there’s obviously so much content on X that volunteer community groups are never going to catch it all. And from what I’m seeing, Grok seems increasingly to be taking over that role. People are just going to ask Grok whether content is verified and legitimate. Based on the Danny Bones example, Grok doesn’t have a great grasp on reality, which reveals that AIs aren’t great at detecting other AIs. So I think it’s a kind of race to the bottom on X in particular. AI content moderation in general gets a lot of criticism, but it’s hard to judge how well it works because we only really hear about it when it fails. There is a huge amount of content that AI can moderate successfully, and at the same time, a huge amount that slips through the cracks. The interesting question for future developments is whether AI moderating AI can ever be reliable.
Let’s move from the online environment to real life. Danny Bones was the first case in which there was a realistic AI influencer endorsing a political party. Do you think there is a real threat that AI influencers will actually shape elections? Or is this more about acknowledging that AI-generated content is now a fact of life?
People are frequently tempted to jump to the worst-case scenario, that this is a threat to democracy and that fair elections are at risk. People in the policy space have picked up on this and claimed that AI is going to be a huge threat. I don’t want to go that far. I think it means treating this as a new, alien thing, while ignoring that we’ve been living with this threat for years. Social media in itself was already a threat to democracy long before AI. I’m speaking a lot about X here, but I’m thinking of other platforms as well. We’ve seen over the years the impact that algorithms, whether on social media or elsewhere, are having. AI is just the cherry on top. With a Danny Bones-type avatar, yes, maybe there are going to be some people influenced by it, who listen to his music and maybe adopt his political stance. But I think we’re a fair way off from very explicit party propaganda delivered by an AI influencer. I don’t think people would get behind that at this point. This episode itself has demonstrated that people can react. In the comments under his videos, some people were saying it was really cringe, that it was a bit disingenuous, but that’s not to say it wouldn’t work for some.
We’ve talked about moderation and the possibility of platforms regulating AI content and hate speech. But your investigation showed that journalism now plays a significant role. What role do you think journalism can play in countering this phenomenon without becoming part of the problem by amplifying its visibility?
It’s always going to be the case that if you cover something harmful online, whether it’s AI or not, you are giving it a platform, giving it a voice, and especially if it’s an influence operation like this, you’re giving them free press and PR. And indeed, we did see that play out. They only launched their memberships after we published. All the crypto activity happened after we published. That’s a double-edged sword. In terms of my own approach, it comes down to evaluating the value of the story. Covering every single piece of AI content online isn’t feasible. The only way to narrow it down is to have some kind of selection criteria. Personally, I follow a few considerations. First, is there a political party involved? Second, are there real-world harms, or a realistic prospect of them: incitement to violence, scams, fraud? Take the crypto activity, for example: that’s pretty close to the real world, people could actually lose money. Another important consideration is AI deepfakes and synthetic images in conflict zones and war situations; that needs to be covered and corrected. We can’t just leave that type of content online unchallenged. But yes, journalists do amplify this type of content, and that needs to be thought about carefully.
This investigation revealed how rapidly AI is developing. What is your view on AI tools becoming more accessible and widespread, and how will this change the online information landscape, for better and for worse?
In the last two years, the commercial generative AI models we use all the time have not necessarily become cheaper; I think they’re actually becoming more expensive, despite what a lot of people say. What we are seeing is much wider adoption, and also a slight tension emerging with companies like OpenAI or xAI, who are noticing that if they make their models looser, with fewer guardrails and fewer restrictions on what you can generate, whether that’s synthetic media or text, they get more user retention. Put too many boundaries on your model and you lose users. So we’re seeing not just greater ease of access and use, but models actively loosening their boundaries, much as we saw with social media, because at the end of the day these tools are a commercial enterprise. The information landscape is hugely compromised, not just in terms of synthetic media and AI-generated content, but also in terms of the data these large language models are trained on and the information they then provide to users. Ultimately, it’s a problem of asymmetric information. It’s a fascinating area, but extremely difficult to measure as a journalist or researcher, because unlike social media, where you can track misinformation spread to some extent, with personalised chatbots everyone has their own version of information. It’s really tricky, and I think we need to find ways to make these companies more accountable. But I see that as unlikely in the near future.