🎯 4 Prompting Tips for At Least 40% More Accurate AI Results

👋 Welcome back to AI for SME Success, your weekly source of practical AI insights and updates that matter to small businesses.

Here’s what we’re covering today:

  • 4 Prompting Tips for At Least 40% More Accurate AI Results
  • What Social Media Users Think About AI Posts
  • AI Cyber Security Weekly Roundup
  • New WordPress Built-In AI Assistant
  • Live Training with Me — Reserve Your Spot!

🎯 4 Prompting Tips for At Least 40% More Accurate AI Results

Most AI tools encourage “natural conversation.” But talking to a machine is not the same as talking to a human.

Here are four tips, grounded in research and experience, that will get you substantially more accurate results from your AI.

1. Use the Reboot Strategy

The research paper “LLMs Get Lost in Multi-Turn Conversation” found that AI models perform about 39% worse in back-and-forth exchanges than when given one clear, complete request upfront. If the model goes down the wrong path early, it rarely fully recovers.

Think of it as a “reboot button”: just like restarting your computer, modem, or smartphone clears glitches and restores performance, starting a fresh chat clears a derailed conversation.

Dos. Front-load context in your first prompt. If the response derails, start a new chat and rewrite your request clearly. Treat AI as a one-shot tool, not a conversation partner.

Don’ts. Don’t expect AI to self-correct. Don’t brainstorm when accuracy matters.
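The “front-load everything upfront” advice can be sketched as a simple prompt builder. This is a minimal illustration; the field names (role, context, task, and so on) are my own, not part of any AI tool’s interface.

```python
# A minimal sketch of front-loading: assemble everything the model needs
# into one complete first request instead of drip-feeding it over many turns.
# The section names below are illustrative, not from any specific tool.

def build_one_shot_prompt(role: str, context: str, task: str,
                          output_format: str, constraints: str) -> str:
    """Combine all instructions into a single, complete request."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
        f"Constraints: {constraints}",
    ])

prompt = build_one_shot_prompt(
    role="You are a small-business marketing assistant.",
    context="We sell handmade candles online; our audience is gift shoppers.",
    task="Write three subject lines for our spring sale email.",
    output_format="A numbered list, one subject line per item.",
    constraints="Under 50 characters each; no emojis.",
)
print(prompt)
```

If the answer derails, don’t argue in the same thread; rebuild the prompt and paste it into a fresh chat.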

2. Be Direct, Forget Manners

When you talk to a machine, be straightforward. Recent studies suggest that dropping politeness phrases from your prompts can improve accuracy by about 5%. There are good reasons for this. First, polite filler adds noise that dilutes the logic of your prompt. Second, it takes extra compute power to process.

Dos. Be very straightforward. Use direct, command-like language such as “do,” “don’t,” “wrong,” and “right.” State what you want first, then specify the format and constraints.

Don’ts. Don’t add filler words like “thank you,” “my apologies,” or “please.” Don’t be vague or overly conversational. Don’t rely on tone to clarify intent.

3. Don’t Say “You Are Wrong”

This tip comes from my own experience. Google Gemini did a great job connecting to Google Maps to calculate distances and travel times for a multi-stop route. Then I spotted an error, said “you were wrong,” and pointed it out. The tool immediately seemed to downgrade: advanced routing stopped, and I started getting generic “I don’t know” answers or suggestions for alternative steps instead.

Dos. Prompt AI, “show the steps you used to get this result.” Identify the specific step that caused the error and instruct it to redo the task without that step.

Don’t. Don’t just label the output wrong; that can shut down the reasoning path completely.

4. Use Prompt Repetition

This one may sound weird, but it’s worth trying. A study titled “Prompt Repetition Improves Non-Reasoning LLMs” describes an odd but effective technique: repeat your exact prompt twice in the same entry.

The same technique is further explored in the Forbes article “Prompt Engineering Finally Proves That Prompt Repetition Gives Better Answers, But Only If You Do So These Ways,” published February 21.

Dos. When reasoning mode is off and accuracy matters, repeat your exact prompt twice in the same entry for factual, single-answer questions. Keep the wording identical.

Don’ts. Don’t use this technique in step-by-step reasoning mode. Don’t apply it to creative brainstorming. Don’t use it for low-stakes tasks.
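In code, the repetition technique is a one-liner. This sketch shows the idea; the `repeat_prompt` helper is my own illustration, not part of any AI tool’s API.

```python
# Sketch of the prompt-repetition technique: send the exact same prompt
# twice in a single entry, separated by a blank line. Keeping the wording
# identical is the point, so the helper just duplicates the string.

def repeat_prompt(prompt: str, times: int = 2) -> str:
    """Repeat the identical prompt, separated by blank lines."""
    return "\n\n".join([prompt] * times)

question = "What year was the first iPhone released?"
doubled = repeat_prompt(question)
print(doubled)
```

You would paste the doubled text as one message, exactly as printed.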


⚠️ What Social Media Users Think of AI Posts

The article “AI Is Taking Over Social Media, but Only 44% of People Are Confident They Can Spot It,” published by CNET, describes the results of an online survey of 2,530 U.S. adult social media users conducted in early February 2026.

Here are the key insights:

  • 94% believe they regularly see AI-created or AI-altered content.
  • 44% feel confident they can tell what’s real versus AI-generated.
  • 51% want better labeling of AI-generated posts.
  • 21% believe AI content should be banned from social media.
  • 11% find AI content useful, informative, or entertaining.

Takeaways:

Nine out of ten social media users distrust AI-generated content, while one in five wants it banned altogether.

If you use AI for social media images, label it clearly.

Consider using AI behind the scenes instead. Rather than generating images outright, use it for tasks like color selection, design optimization, reverse-engineering best practices, brainstorming, and layout ideas.


🛡️ AI Cyber Security Weekly Roundup

This week in AI security: Cisco flagged “excessive agency” as a top vulnerability in its 2026 report, sensitive content appeared in Copilot outputs, and OpenAI introduced Lockdown Mode to reduce prompt injection risk.

What is Excessive Agency Risk?

The State of AI Security 2026, recently published by Cisco, warns that Excessive Agency has become a primary vulnerability category as organizations grant AI agents unsupervised authority over critical business functions. The report highlights that 2025 “marked the end of the AI-assisted era and 2026 signals the beginning of the AI-driven era.”

Excessive agency vulnerability means granting AI agents more autonomy and system access than necessary, allowing them to execute actions, access data, or trigger workflows without sufficient human oversight.

To minimize this risk, limit AI agents to least-privilege access, control extensions, require human approval for high-risk actions, and regularly review system permissions.
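A least-privilege gate for agent actions can be sketched in a few lines. This is a simplified illustration under my own assumptions; the action names and risk tiers are hypothetical, not from Cisco’s report or any real agent framework.

```python
# Minimal sketch of a least-privilege gate for AI-agent actions:
# the agent may only run actions on an explicit allowlist, and
# high-risk actions additionally require human approval.
# Action names and risk tiers here are illustrative.

ALLOWED_ACTIONS = {"read_report", "draft_email", "send_email"}
REQUIRES_APPROVAL = {"send_email"}  # high-risk: needs human sign-off

def authorize(action: str, human_approved: bool = False) -> bool:
    """Allow an action only if it is allowlisted and, when
    high-risk, explicitly approved by a human."""
    if action not in ALLOWED_ACTIONS:
        return False  # not on the least-privilege allowlist
    if action in REQUIRES_APPROVAL and not human_approved:
        return False  # high-risk action without human oversight
    return True

print(authorize("read_report"))                      # low-risk, allowed
print(authorize("send_email"))                       # blocked: no approval
print(authorize("send_email", human_approved=True))  # allowed with approval
print(authorize("delete_database"))                  # blocked: not allowlisted
```

The design choice is deny-by-default: anything not explicitly granted is refused, which is exactly the opposite of handing an agent broad system access.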

The report also notes that attacks against AI systems have become industrialized and increasingly automated.

Copilot Exposed Confidential Information

Because of a simple coding error, confidential materials appeared in Copilot responses. Microsoft is currently working on a fix. The incident is a reminder that confidential information should always be kept out of reach of AI tools.

ChatGPT Lockdown Mode

In a recent update, OpenAI noted that prompt injection is a growing risk in which attackers trick AI systems into executing malicious instructions or disclosing sensitive information. Lockdown Mode restricts ChatGPT’s access to external systems and disables potentially exploitable tools. It is an optional feature available for ChatGPT Enterprise (min 150 users), Education, Healthcare, and Teachers.


✍️ New WordPress Built-in AI Assistant

On February 17, WordPress announced a new built-in AI Assistant integrated directly into its editor and Media Library.

The tool helps users write, rewrite, structure pages, adjust design elements, and generate or edit images without leaving their workflow. It works through natural-language prompts and is optimized for block themes.

Optional and easy to enable, the assistant aims to streamline content creation, simplify site design, and reduce reliance on third-party AI tools.


🖥️ Live Training with Me — Reserve Your Spot!

Join me for two hands-on sessions designed specifically for small business owners:

A Step-by-Step Guide to Building Profitable AI Workflows

📅 Feb 26 | 12–1 PM EST

How To Make Impactful On-Brand Visuals with AI

📅 Feb 26 | 2–3 PM EST


Thank you for reading today’s edition!

If this issue was valuable, pass it along to a fellow business owner.

Also, I’d love to hear your feedback, questions, or topic suggestions at natalia@nataliabrattan.com.

See you next week,


Natalia

Share this newsletter:

About The Newsletter

My newsletter turns the latest AI and tech news into practical, actionable insights for SMEs and solopreneurs who want to innovate, grow, and stay competitive.

Learn more and sign up >
