From Smart Replies to Social Avatars: Role of AI in Communication
AI-powered personalisation is steadily becoming integrated into our daily communications, from real-time translations that remove language barriers to smart reply suggestions that draft our messages. The expanding role of AI in communication is transforming how we interact in both personal and professional contexts.
Social and professional platforms are implementing these AI-driven features, offering more tailored interactions. As this change progresses, both executives and users are weighing the convenience against concerns about authenticity, trust and privacy.
Below, we explore how AI-assisted conversation tools are reshaping social norms, public perception of these changes, the emerging privacy implications, and broader questions about the future of human connection in an AI-mediated world.
The Growing Role of AI in Communication Platforms
Leading social platforms are increasingly integrating AI tools to enhance user communication. LinkedIn has built generative AI into its profile editor to help users craft their personal biographies.
Powered by OpenAI’s GPT models, the tool offers Premium subscribers suggested wording for the “About” and “Headline” sections of their profiles. The AI analyses the member’s skills and experience to produce a custom-written summary that reflects their voice and style, which the user can then edit as needed.
Recruiters get a similar GPT-driven aid for writing job descriptions, generating a first draft based on basic inputs like job title and company. Effectively, parts of one’s professional persona can now be AI-assisted.
Other major platforms are following suit with AI-assisted communication tools. Meta (Facebook’s parent company) recently introduced “Meta AI,” a conversational assistant available across WhatsApp, Messenger and Instagram.
Users can chat with Meta AI as they would a person, even asking it questions in group chats or having it generate images on command.
Alongside this assistant, Meta launched 28 AI characters with unique personas – some modelled after celebrities like Snoop Dogg and Kendall Jenner – that people can message for advice, inspiration or entertainment.
These AI avatars even have social media profiles of their own, blurring the line between traditional online influencers and virtual ones.
On X (formerly Twitter), Elon Musk is integrating his new AI venture into the platform.
X’s Premium subscribers gained access to Grok, a chatbot built on a custom large language model with a bit of wit and real-time knowledge drawn from the platform’s data.
Musk has described it as a “truth-seeking” AI assistant and plans to deeply embed it into X’s user experience.
This could mean Twitter users will soon have AI at their fingertips for drafting posts, answering queries or even fact-checking content in real time.
Even communication tools outside social networking are leveraging AI to mediate interactions. Gmail’s Smart Reply and Smart Compose features have for years been using AI to suggest replies or complete sentences in emails.
Google reported that by 2017, roughly 12% of all messages sent via Gmail’s mobile app were composed using its AI “smart reply” suggestions, which Google equated to 6.7 billion AI-assisted emails per day.
That percentage has likely grown, and the same underlying technology now powers quick-reply suggestions in business chat apps and customer service platforms.
Across the board, routine conversations – from quick social messages to work emails – are being optimised by algorithms that learn how we communicate and offer assistance.
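To make the idea concrete, the quick-reply mechanics described above can be sketched in miniature. Production systems such as Gmail’s Smart Reply use trained neural models; the toy version below simply ranks a hypothetical set of canned replies by word overlap with the incoming message, purely for illustration.

```python
# Toy sketch of a smart-reply suggester. The candidate replies and the
# word-overlap scoring are illustrative assumptions, not any real system.

CANDIDATES = [
    "Sounds good, see you then!",
    "Thanks for the update.",
    "Can we reschedule the meeting?",
    "Congratulations on the launch!",
]

def _words(text: str) -> set[str]:
    """Lowercase and strip trailing punctuation for a rough word match."""
    return {w.strip(".,!?") for w in text.lower().split()}

def suggest_replies(message: str, k: int = 3) -> list[str]:
    """Return the k candidate replies sharing the most words with `message`."""
    msg_words = _words(message)
    scored = sorted(
        CANDIDATES,
        key=lambda reply: len(msg_words & _words(reply)),
        reverse=True,
    )
    return scored[:k]

print(suggest_replies("Thanks for sending the meeting update"))
```

A real deployment would replace the overlap score with a learned relevance model and generate candidates dynamically, but the interaction pattern – message in, ranked suggestions out – is the same one users see in their inbox.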
User Perception: Convenient or Less Genuine?
As AI-mediated interactions become commonplace, users are divided on how they feel about them. Many appreciate the convenience and efficiency. Studies find that AI-assisted chat can indeed speed up communication and even lead to a more positive tone.
In one experiment, conversations where participants used an AI reply assistant were rated as having a more upbeat emotional tenor and proceeded faster than those without AI help.
Participants with access to “smart replies” (suggested responses) often took advantage of them – roughly one in seven messages in the study was drafted by AI. The AI tended to insert more positive language, which improved rapport and satisfaction in chats as long as both sides were comfortable.
Indeed, researchers from Cornell University noted that AI suggestions made conversations more upbeat and kept things moving smoothly, contributing to generally pleasant exchanges.
Yet there is a clear trade-off in authenticity. If people even suspect that a message or profile was generated by AI, their trust in the interaction decreases.
In the Cornell study, participants who merely believed their partner was using AI-crafted replies became more distrustful, rating the partner as less cooperative. Ironically, when AI was actually used (but not disclosed), partners found each other more likable – perhaps thanks to the polite, cheerful tone the AI introduced.
This suggests that AI can enhance communication on a surface level, but it carries an “interpersonal toll” if revealed. In the words of the lead researcher Jess Hohenstein, “by using text-generating AI, you’re sacrificing some of your own personal voice”.
Co-author Malte Jung cautioned that companies providing these AI tools could end up shaping users’ interactions and perceptions of each other in subtle ways.
Surveys of social media users echo this ambivalence. One consumer study found 72% of people believe AI makes it harder to tell what content is genuine, even as most assume it’s being used behind the scenes.
Younger generations appear more comfortable embracing AI in daily life – 84% of Gen Z respondents said they use at least one AI tool today.
They’re using AI not just for Q&A, but for creative tasks like writing and brainstorming. In contrast, older users tend to be more sceptical. There’s also a desire for honesty: when AI is used, clear disclosure can improve trust.
In the advertising world, ads labelled as AI-generated actually saw a 73% boost in perceived trustworthiness from consumers, according to the same study – suggesting people appreciate brands being upfront about AI usage.
This may hold a lesson for personal communications as well.
We’re also seeing people test AI’s help in more personal areas like dating, with mixed reactions. A recent Match.com survey of singles found that 14% of online daters have experimented with AI to improve their chances.
Among those who did, nearly half used AI to write their dating profile bio, and 37% used it to help compose opening messages. Many of these daters reported positive outcomes – a third said AI assistance actually helped them get better matches and meet partners faster.
The added polish and wit from an AI coach boosted their confidence in flirting. However, most singles still draw a line when it comes to authenticity: 50% said it would bother them if a match used AI to alter photos (presenting an unreal image), and 39% objected to a match using AI in every single conversation.
In other words, a touch of AI help is acceptable – even welcome – but overreliance on AI feels like a turn-off to a majority. People still want to know there’s a real person behind the messages and photos, especially in contexts that hinge on trust and personal connection.
Privacy and Data Security Concerns
The rise of AI-mediated communication also raises new privacy questions. To personalise replies or translate our speech, these AI systems must ingest and analyse our personal data – from the contents of messages to voice recordings – which naturally concerns privacy advocates.
Users are asking: Who gets to see this data? How is it stored or used later? The answers have sometimes been unsettling.
Several platforms have quietly updated their privacy policies so that user data can feed their AI systems behind the scenes. In 2023, LinkedIn disclosed that it would use members’ profile information and content to train its AI models by default.
Without an obvious opt-in, millions of users were essentially enrolled as AI training data sources – a move that drew criticism once it came to light. (Users can find a buried opt-out setting, but only if they know where to look.) Meta has similarly admitted that it leverages public user data from Facebook and Instagram to hone its AI systems.
This industry trend – harvesting personal data to improve AI – has regulators and civil-society groups on alert; many argue that opt-out consent is not good enough. People want more control and transparency if their words, photos and interactions are going to shape machine learning models.
There are also data security risks when AI becomes the middleman in our conversations.
A notable case at Samsung illustrated the danger of sending sensitive information to third-party AI services. Samsung engineers, seeking coding help, pasted confidential source code and meeting notes into ChatGPT – not realising those inputs could be retained.
Within weeks, proprietary code and business plans had been inadvertently fed into OpenAI’s servers. Since many AI providers do keep user prompts (often to further train the AI or for moderation), anything we share with an AI might persist outside our control.
In Samsung’s case, the leak was serious enough that the company banned employees from using external AI tools and started developing an in-house version.
This cautionary tale underscores a key point: when using AI in communication, we might be giving up ownership of our words, unless strong privacy protections are in place.
Consent and context are other emerging dilemmas.
For instance, if an app offers real-time transcription or translation of your voice chats, are all parties aware and agreeable to their speech being processed by AI? If one person in a conversation uses an AI assistant to draft responses, should they tell the other person? In professional settings, companies now issue guidelines on using AI – often warning employees not to input private client data or personal identifiers into chatbots.
Data laws like GDPR are also coming into play: Italy briefly banned ChatGPT in 2023 over how it collected personal data without clear consent, forcing the service to add user disclosures and opt-outs.
As AI personalisation spreads, organisations will need to ensure responsible AI use policies that safeguard personal information and respect user consent.
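One practical expression of such policies is a pre-submission filter that scrubs obvious identifiers before text reaches an external chatbot. The sketch below is a minimal, assumed example: the two regex patterns are illustrative only, and real deployments would use dedicated PII-detection tooling rather than a handful of hand-written rules.

```python
import re

# Minimal sketch of a pre-submission redaction filter. The patterns below
# (email and phone) are illustrative assumptions and far from exhaustive.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the text
    is sent to any third-party AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +44 20 7946 0958 about the contract."))
```

Even a crude gate like this changes the default from “anything can be pasted out” to “identifiers are stripped unless someone deliberately bypasses the filter”, which is the spirit of the corporate guidelines described above.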
Finally, there’s the question of algorithmic bias and manipulation. Personalised AI replies or feeds might gradually shape what we say and see.
If an email app’s smart reply consistently suggests upbeat, agreeable responses (and buries more dissenting ones), it could subtly nudge employees toward a certain communication style – raising ethical questions about free expression.
Likewise, AI filters that auto-remove “harmful” content must be carefully calibrated to avoid silencing legitimate speech. All these issues highlight that embedding AI in personal communication isn’t just a technical upgrade; it’s a social and ethical challenge.
Companies deploying these features will need robust data governance, user education, and perhaps even AI transparency labels (like an indicator that “This message was AI-assisted”) to maintain user trust in the long run.
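A transparency label of the kind mentioned above could be as simple as a boolean flag carried alongside each message, which clients then render as an indicator. The field names below are hypothetical, not any platform’s real schema; this is only a sketch of the idea.

```python
from dataclasses import dataclass

# Sketch of attaching an AI-assistance flag to outgoing messages so that
# clients can render an "AI-assisted" indicator. Field names are hypothetical.

@dataclass
class Message:
    author: str
    body: str
    ai_assisted: bool = False  # set True when an AI tool drafted or edited the body

def render(msg: Message) -> str:
    """Render a message line, appending a label when AI assistance was used."""
    label = " [AI-assisted]" if msg.ai_assisted else ""
    return f"{msg.author}{label}: {msg.body}"

print(render(Message("dana", "Happy to reschedule!", ai_assisted=True)))
```

The design choice worth noting is that the flag travels with the message rather than being inferred later, so disclosure does not depend on detection.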
Future Implications: Deep Questions for AI-Driven Socialisation
The advent of widespread AI-driven socialisation prompts profound questions about the long-term impact on human relationships and culture. As we delegate more of our social output to algorithms, it’s worth considering how this might reshape our fundamental social skills and norms.
Will constant AI mediation erode human empathy or interpersonal skills? Some psychologists and technologists worry that over-reliance on AI in our interactions could blunt our ability to understand and relate to each other.
Empathy is built through observing and responding to the nuanced emotions of real people. If instead we increasingly interact via AI “buffers” – e.g. using AI to comfort a friend or having a bot as a companion – we might miss out on practising empathy in the real world.
Ritesh Jain, a tech commentator, noted that as AI interactions become indistinguishable from human ones, it can lead to “a sense of distrust and cynicism in our relationships, as well as a loss of empathy and social skills”.
The concern is that emotional intelligence could atrophy if we often let AI take over difficult conversations or fail to learn the “give and take” of genuine dialogue. For example, children who grow up commanding virtual assistants (“Alexa, do this…”) might not develop the same patience or courtesy in conversation – an issue researchers have already flagged with voice assistants potentially hindering social development in children.
On the other hand, optimists suggest AI might help enhance empathy if used thoughtfully – perhaps by prompting users with more caring responses or providing training simulations for social skills.
The net effect on human empathy will likely depend on how we use these tools: as supplements to human connection, or as substitutes for it.
How might cultural norms shift with AI enhancing or filtering expression?
Every culture has unwritten rules for communication – what counts as polite, how direct one can be, how to address people across hierarchies, and so on.
AI systems, however, come with their own “norms” (often influenced by the data they were trained on). We may see a homogenisation of communication styles across cultures if the same AI models are suggesting phrasing worldwide.
For instance, GPT-based assistants tend to favour a neutral, generally upbeat tone. If business emails around the globe start to sound like they were written by the same polite assistant, local flavours of expression could diminish.
In contrast, language translation AI can enable cross-cultural exchange more than ever – allowing people from different countries to chat or do business without a language barrier.
That could enrich cultural understanding, but it might also create a “language bubble” effect where individuals rely on machine translation instead of learning other languages or the subtleties of those cultures.
Norms around authenticity may evolve too: it could become standard practice to assume many social media posts or messages have had AI help. We might start valuing the ideas conveyed more than the exact wording (since wording might be machine-optimised).
Alternatively, a counter-trend valuing raw human expression might emerge, much as some people now seek out organic, unedited content as a reaction to highly edited social media.
Culturally, there may also be new etiquette questions – is it acceptable to use an AI ghostwriter for your tweets? Should you disclose AI involvement in a heartfelt note or a condolence message? Society will likely establish these norms over time, much as we did with the rise of social media itself.
Will people outsource large portions of their social presence to AI?
Looking ahead, we can imagine a world where everyone has an AI “social avatar” that represents them in digital spaces.
In fact, this is already starting: companies like Meta are introducing AI characters, and startups are working on personal AI assistants that learn your style. It’s conceivable that in a few years, an executive could have an AI agent handle routine LinkedIn interactions – commenting on posts, sending connection requests, even answering basic messages – all in the exec’s tone.
Some busy individuals might maintain AI-managed profiles that stay active 24/7, while they only occasionally check in. In virtual environments and the metaverse, AI-driven avatars could stand in for us at meetings or social gatherings we can’t attend.
This outsourcing of social presence raises both exciting and troubling possibilities. On one hand, AI avatars could enable continuity and efficiency – your personal brand and relationships are tended even when you’re occupied elsewhere, potentially enhancing productivity.
On the other hand, it prompts the question of authenticity: if much of your online persona is effectively run by an algorithm, are you still “you” in those spaces? People might start interacting AI-to-AI – my agent negotiates with your agent – which could make human connections feel transactional.
There’s also the risk of identity misrepresentation or error – an AI that tweets in your name could go off-script, with reputational consequences.
Societies may respond by placing greater value on in-person interaction and “the human touch” as a premium, or conversely, embracing a new norm where human and AI social agents coexist in our relationship networks.
Balancing Innovation with Humanity
AI-driven personalisation in communication is clearly a double-edged sword. For businesses and social platforms, these technologies offer powerful advantages – breaking language barriers in real time, streamlining customer service chats, helping users express themselves more clearly, and keeping people engaged on platforms.
The business norms of communication are shifting accordingly: it’s becoming acceptable to use AI for drafting emails or posts, and even a mark of efficiency. Companies are beginning to encourage employees to leverage AI assistants as productivity tools, much as they once encouraged internet research or smartphone use.
At the same time, the social norms around trust, authenticity and privacy are being tested. Thoughtful leadership is required to ensure we don’t lose the human elements that make communication meaningful.
Executives implementing AI communication tools should consider guidelines to maintain authenticity (for example, encouraging a personal review or adjustment of any AI-generated message, so the human’s intent and emotion are still present).
Transparency can also build trust – letting users know when they’re chatting with an AI or when content was AI-assisted can prevent the feelings of deception that undermine confidence.
Finally, there’s an opportunity to shape the trajectory of this technology in a positive direction. Rather than replacing human connection, AI can be positioned as an enhancer of human connection – a tool that helps us spend less time on trivial communications and more time on deep, interpersonal ones.
Real-time translation, for instance, can foster empathy by enabling direct dialogue between people who never could have spoken before. AI coaching tools might help individuals practise social skills in a safe space.
If used responsibly, AI could even strengthen certain social abilities while taking over some of the mundane aspects of communication.
In conclusion, AI-mediated personalisation is redefining how we interact in both business and personal realms.
It offers a vision of seamless, cross-cultural, efficient communication – a world where your words are always at your fingertips and no one is unreachable due to language or time.
But alongside that vision, we must address the challenges: keeping our communications genuine, safeguarding the intimacy of human-to-human contact, and protecting the privacy of the words we share.
Striking this balance will be key as we navigate the new norms of an AI-enhanced social world, ensuring that technology serves to enrich human connection, not erode it.