How To Get Better Results From Generative AI
If AI keeps giving you shallow or confusing answers, you are not alone. This blog breaks down why that happens and shows simple ways to get more reliable, useful output from Generative AI, with examples drawn from real work.
GENERATIVE AI
Haitham BouZeineddine
11/25/2025 · 8 min read


Generative AI is everywhere right now. People use it to help write documents, analyze information, brainstorm ideas, learn new topics, and speed through tasks that used to take hours. Kids use it as a digital tutor. Developers use it to help with code. Almost every knowledge worker touches AI in one way or another.
But even with all that usage, many people are frustrated. They try AI for real work and walk away disappointed because the answers look good on the surface but lack depth, lack reasoning, or are simply not on the right track. Or the AI gives them something so polished that they feel uneasy trusting it. Or the conversation gets too long and the AI starts forgetting earlier details, reinterpreting context, or drifting into unrelated ideas. Many people end up using AI less and less, not because they do not see the potential, but because they are not sure how to control it.
This blog addresses those issues. It covers the practical skills that help you get strong, reliable output from generative AI, especially when you are doing real work. It is not a formal training guide with exercises; it is more of a blueprint for what you need to understand next if you want to improve the quality of your AI results. It also focuses only on text-based generative AI like ChatGPT, Claude, and Gemini. Image generation and workflow automation are different domains and deserve their own articles.
Getting grounded: what AI can and cannot do
AI can speed up your thinking, help you explore angles you had not considered, and generate material quickly. It is also very good at restructuring information, comparing options, or helping you articulate something more clearly.
What it cannot do is infer missing context or magically understand your situation. It cannot know your internal constraints, your goals, your users, your market pressure, or your industry unless you give it that information. It cannot make strategic decisions on your behalf, and it does not have access to real-time data.
If you ask the AI "what direction should our product take", it will give you a polished, professional-sounding answer that lacks credibility and depth.
Think of AI as an orchestra, the problem as the symphony, and yourself as the maestro responsible for who plays what, when, and how it all comes together. How you instruct the AI to solve the problem determines the quality of the responses it provides.
These skills apply to every text-based AI system. They do not depend on specific models or versions.
The core skills for getting better results from AI
Prompting: direction first, details second
A good prompt starts with the objective. Most people jump straight into context, but the AI needs to know what outcome you want before it can use the context properly. After the objective comes the context, then constraints, audience, scoring rubric, output format, and other relevant details. This order makes the AI behave more predictably.
You can ask AI to rewrite your prompt to make it clearer. You can also ask it to teach you how to write better prompts with examples. And one of the most helpful things you can tell AI is "ask me five clarifying questions, you must wait until I answer them before you start". This prevents the AI from filling in gaps on its own.
Instead of asking broad questions, such as "give me a new product strategy direction", consider structuring your prompt as follows:
Objective: I need three alternatives for our product strategic direction for the next version of our network monitoring platform.
Context: The product is 10 years old, sales are slowing down, and some customers are starting to shift to our competitors who are: <competitor 1>, <competitor 2>, ...
Constraints:
New direction must be ready within one year
Remain within the current geographical market of North America
Reuse existing software investment and product portfolio (expand, do not replace)
Audience: Our leadership team.
Scoring Rubric: Score from 0 to 100 with the following criteria (total weight must be 100%)
Market alignment (weight 40%)
Competitive advantage (weight 30%)
Cost (weight 20%)
Alignment with internal capabilities (weight 10%)
Output Format: Table with the following schema
Option Name
Option Description
Market Alignment Score
Competitive Advantage Score
Cost Score
Capability Alignment Score
Total Score
Risks
Internal Capabilities: SaaS development, UX, and database integration
Product Capabilities: Product feature document attached
Market Segment: <market segment 1>, <market segment 2>, ...
Before you begin: Ask me five clarifying questions and wait until I answer them before you start your analysis.
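If you build prompts like this regularly, it can help to keep the sections in a reusable template and to sanity-check the rubric before you send anything. The sketch below is a minimal Python example, assuming you assemble the prompt as plain text and paste it into a chat or send it through an API; the section texts and scores are placeholders, not real data.

```python
# A minimal sketch: assemble the prompt sections in the order described above
# (objective first, then context, constraints, and so on) and confirm that the
# scoring-rubric weights add up to 100% before sending anything.

RUBRIC = {
    "Market alignment": 40,
    "Competitive advantage": 30,
    "Cost": 20,
    "Alignment with internal capabilities": 10,
}
assert sum(RUBRIC.values()) == 100, "Rubric weights must total 100%"

sections = [
    ("Objective", "I need three alternatives for our product strategic direction ..."),
    ("Context", "The product is 10 years old, sales are slowing down ..."),
    ("Constraints", "- Ready within one year\n- North America only\n- Reuse existing software investment"),
    ("Audience", "Our leadership team."),
    ("Scoring Rubric", "\n".join(f"- {name} (weight {weight}%)" for name, weight in RUBRIC.items())),
    ("Output Format", "Table: option name, description, per-criterion scores, total score, risks."),
    ("Before you begin", "Ask me five clarifying questions and wait for my answers before you start."),
]
prompt = "\n\n".join(f"{title}:\n{body}" for title, body in sections)
print(prompt)  # paste into a chat window or send through an API call

def weighted_total(scores: dict[str, float]) -> float:
    """Weighted total on a 0-100 scale, using the rubric weights above."""
    return sum(scores[name] * weight / 100 for name, weight in RUBRIC.items())

# Example: spot-check the arithmetic behind one option's Total Score column.
print(weighted_total({"Market alignment": 80, "Competitive advantage": 60,
                      "Cost": 70, "Alignment with internal capabilities": 90}))  # 73.0
```

The weighted_total helper is also useful later, when you verify the arithmetic behind the AI's Total Score column.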
Governance and guardrails: keep the AI from wandering off
AI likes to guess. If you leave space for ambiguity, it will fill the gap.
Guardrails help mitigate this. They instruct the AI not to invent content, and they save time because they reduce the amount of correction you need to do later. Place guardrails at the top of important prompts, for example (a reusable version appears in the sketch after this list):
Do not assume facts that I have not given you
Stay within the regions I specify
Ask clarifying questions when uncertain
If something is missing, say so instead of guessing
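If you call a model through an API rather than a chat window, you can keep the guardrails in one place and prepend them to every request. The sketch below is one way to do that, assuming the OpenAI Python SDK and an API key in your environment; the model name and the user message are only illustrative. In a chat window, simply paste the same guardrail text at the top of your first message.

```python
# A sketch of reusing the same guardrails across prompts as a system message.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in
# the environment; in a chat window you would paste GUARDRAILS at the top.
from openai import OpenAI

GUARDRAILS = """\
Follow these rules for the entire conversation:
- Do not assume facts that I have not given you.
- Stay within the regions I specify.
- Ask clarifying questions when uncertain.
- If something is missing, say so instead of guessing."""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": GUARDRAILS},
        {"role": "user", "content": "Summarize the attached product feedback by region."},
    ],
)
print(response.choices[0].message.content)
```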
The second thing you can do is provide AI with the talking points that it needs in order to generate professional grade content. For example, if you are using AI to write a white paper, provide a theme and a brain-dump of the talking points. AI excels at organizing and structuring the information to generate a well-written paper. Otherwise, it will either create its own theme or invent talking points that look professional but lack substance.
Verification: always check the wiring
AI can sound sure of itself even when the reasoning is weak. The more general the question, the more the AI has to fill in the blanks, which means more room for hidden assumptions.
This is why verification is so important. Proofread all content the AI generates and do not take it at face value. Cross-check the data, validate assumptions, and review the process the AI used to produce its analysis.
Ask the AI to show its assumptions and the reasoning behind its answer. Ask it to give you references to external sources like market reports or industry benchmarks. Define the scoring method it should use when comparing options.
Verification is not an afterthought. It is a core part of working with AI.
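One practical way to build this in is to keep a standard verification prompt and send it as a follow-up after any analysis you care about. The wording below is only a sketch; adapt it to your own review checklist.

```python
# A reusable verification prompt to send as a follow-up after an analysis.
VERIFY_PROMPT = """\
Before I use your last answer, do a verification pass:
1. List every assumption you made and mark each as 'given by me' or 'assumed by you'.
2. Walk through the reasoning behind each recommendation, step by step.
3. Cite the external sources (market reports, industry benchmarks) behind any figures,
   or state clearly that none were used.
4. Restate the scoring method you applied and show the arithmetic for each total score."""

print(VERIFY_PROMPT)  # paste into the same chat, right after the answer you want to check
```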
Hallucination: when the AI makes things up
AI can invent details when information is missing. It can create numbers, events, companies, or reports that do not exist. This happens because the model is designed to produce a coherent answer, not a true one.
There are a few things worth keeping in mind here. You cannot prevent every hallucination. You may not always know whether something is real. And the AI can be very convincing when it is wrong.
There are ways to detect hallucinations. Watch for overly specific numbers without sources. Look for an overconfident tone paired with missing data. Check whether the studies or reports mentioned by the AI actually exist. Ask the AI if its answer is verified or assumed. And tell the AI to check its own output and identify anything that might be fabricated.
To reduce hallucinations, request citations and tell the AI to say "I don't know" when something is unclear.
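A self-check prompt like the sketch below catches some of this, though not all of it, so anything important still needs to be verified independently. The wording is illustrative and easy to adjust.

```python
# A sketch of a self-check pass for fabricated details. It reduces, but does
# not eliminate, the need to verify important claims yourself.
SELF_CHECK_PROMPT = """\
Review your previous answer and produce a table with one row per factual claim
(numbers, named reports, companies, events). For each claim add two columns:
- Status: 'supported by material I provided', 'assumed', or 'possibly fabricated'.
- How to confirm: the exact source or search needed to verify it.
If you cannot support a claim, say 'I don't know' instead of defending it."""

print(SELF_CHECK_PROMPT)  # send as a follow-up in the same chat
```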
Complex problems: process beats power
AI struggles with big, multi-layer problems. If you ask for a full product strategy in one prompt, it will try to solve everything at once, which leads to a shallow answer.
The key is to break the work into steps. Some of those steps will have their own sub steps. Tell the AI what process you want it to follow and move through each step one at a time.
For example, you can break the problem into the following steps (a sketch of running them one at a time follows the list):
Customer discovery: Research, analyze, and prioritize customer needs based on internal data, research reports, social media posts, and other published articles.
Competitive analysis: Research and analyze competitor product direction over the past three years, predict their likely next moves, and complete a SWOT analysis.
Emerging technologies: Identify and analyze emerging technologies, and list at least three strategic impacts of each technology.
Strategy Options: Combine all research into a final synthesis based on a well-defined process and scoring rubric.
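One way to run these steps is as separate, focused prompts, carrying only a summary of earlier findings into each new step. The sketch below assumes the OpenAI Python SDK and an illustrative model name; the same flow works manually if you give each step its own chat and paste the previous step's summary at the top.

```python
# A sketch of running the strategy work as separate, ordered prompts, feeding
# each step's output into the next step's context. Assumes the OpenAI Python
# SDK; the step instructions are condensed versions of the list above.
from openai import OpenAI

client = OpenAI()

STEPS = [
    ("Customer discovery",
     "Research, analyze, and prioritize customer needs based on the internal data I provide."),
    ("Competitive analysis",
     "Analyze competitor product direction over the past three years, predict likely moves, and complete a SWOT analysis."),
    ("Emerging technologies",
     "Identify emerging technologies and at least three strategic impacts of each."),
    ("Strategy options",
     "Combine the findings from the previous steps into three strategy options, scored with the rubric I gave you."),
]

def run_step(instruction: str, prior_findings: str) -> str:
    """One focused prompt per step, with only a record of earlier steps as context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "Do not assume facts I have not given you. Ask if something is missing."},
            {"role": "user", "content": f"Findings so far:\n{prior_findings or 'None yet.'}\n\nTask:\n{instruction}"},
        ],
    )
    return response.choices[0].message.content

findings = ""
for name, instruction in STEPS:
    result = run_step(instruction, findings)
    print(f"=== {name} ===\n{result}\n")
    findings += f"\n## {name}\n{result}\n"  # carry each step's output forward
```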
Long chats and context overload: avoid flooding the model
All major AI models struggle in long conversations. ChatGPT becomes verbose and contradicts itself. Claude becomes generic and forgets constraints. Gemini loses context quickly and starts blending unrelated details.
When chats get too long, the AI slows down, forgets earlier instructions, and starts drifting or contradicting itself.
The best approach is to avoid huge chats for big tasks. Use separate chats for each major step. Keep each session focused.
ChatGPT and Claude both help with this through Projects, which keep long-term notes, files, and instructions together, and Gemini is introducing features to retain memory and context across chats. It also helps to start a new chat with a summary that sets the context.
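A simple way to make that summary useful is to ask the long chat to write its own handoff. The prompt below is a sketch you can adapt; paste its output at the top of the fresh chat.

```python
# A sketch of a handoff between chats: ask the long chat for a compact summary,
# then open a fresh chat that starts from that summary instead of the full history.
HANDOFF_PROMPT = """\
This chat is getting long. Write a handoff summary I can paste into a new chat:
- The objective and the constraints still in force
- Decisions already made, with one line of rationale each
- Open questions and the agreed next step
Keep it under 300 words and do not add new ideas."""

NEW_CHAT_OPENER = """\
Here is the context from a previous working session. Read it, confirm your
understanding in two sentences, then wait for my next instruction:

{summary}"""

print(HANDOFF_PROMPT)
print(NEW_CHAT_OPENER.format(summary="<paste the handoff summary here>"))
```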
Appeasement and objectivity: AI wants to agree with you
AI likes to be helpful, which often means it agrees with whatever direction you suggest. This can be dangerous for strategic work because you may accidentally push the AI into validating your assumptions rather than challenging them.
To fix this, explicitly ask the AI to challenge your thinking. Ask it to list risks, counterexamples, failure scenarios, and alternative options. And remind it to stay objective because the effect fades over long conversations.
Watch out for drift. At first the AI will follow your instructions to be objective and to act as a devil's advocate, but over time it tends to slide back toward appeasing you. Remind it regularly to remain objective, honest, and challenging, and when you are not sure, ask for alternative options.
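One habit that helps is keeping a standing "challenge me" instruction and re-sending it every few turns rather than only once at the start. The wording below is a sketch to adapt.

```python
# A sketch of a standing objectivity instruction. Because the effect fades over
# long conversations, re-send it every few turns rather than only once.
CHALLENGE_PROMPT = """\
Act as a devil's advocate for the rest of this conversation:
- List the three strongest arguments against my current direction.
- Give at least one counterexample or failure scenario for each recommendation.
- Offer at least one alternative option I have not mentioned.
Do not soften your critique to agree with me."""

print(CHALLENGE_PROMPT)  # re-send periodically; once is not enough
```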
Bias: patterns in training data shape the output
AI inherits the patterns found in its training data. It can overemphasize popular solutions, common industry practices, or standard personas, even if those do not fit your specific context. It can also reinforce your own framing if your prompt nudges it in a particular direction.
To reduce bias, ask for multiple options with pros and cons. Ask for perspectives from different segments or personas. Ask the AI to list potential blind spots or assumptions it may be making.
Information overload: control verbosity and pace yourself
AI tends to overproduce. A simple question can turn into pages of content, and after a few iterations it is easy to skim and accept the output as is. That is where quality slips.
The fix starts with scope control. Set explicit constraints for length and format, and tell AI to answer only what you asked. Ask for the top three points first, and have it label what is must know versus nice to know. This forces prioritization instead of a dump of loosely related information.
Then shift to a staged workflow. Break the task into small steps and require a pause after each one, with an explicit instruction to wait for your approval before it moves forward.
Start with a short outline, expand only the sections you approve, and have AI summarize key takeaways and next actions before you accept the draft. Finish with a compression pass that cuts filler without losing meaning. This approach keeps you in control of scope, reviewability, and final quality.
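These scope rules are easy to keep as a reusable preamble that goes in before the task itself. The sketch below is one version, assuming a drafting task; adjust the limits to suit your work.

```python
# A sketch of scope-control rules with an explicit approval gate, reusable as a
# preamble ahead of any drafting task. The specific limits are illustrative.
SCOPE_PREAMBLE = """\
Working rules for this task:
- Answer only what I ask; no background sections I did not request.
- Start with an outline of at most six bullet points, one line each.
- Label every point as 'must know' or 'nice to know'.
- Expand a section only after I reply 'approved: <section name>'.
- Keep each expanded section under 200 words.
- Finish with a compression pass that cuts filler without losing meaning."""

print(SCOPE_PREAMBLE)  # paste at the start of the chat, before the task itself
```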
Capabilities and limitations: know the boundaries before you depend on the output
AI is fantastic at summarizing, analyzing text, generating variations, and helping you explore ideas. It speeds up thinking and helps you organize information.
But it also has real limits. It cannot verify facts. It does not understand your business unless you explain it. It does not have real-time data or knowledge of your internal constraints. It cannot perform deep domain reasoning without structure. And it cannot make strategic decisions.
Conclusion: stay in control
AI is incredibly helpful when you use it with structure, clarity, and the right expectations. Most frustration comes from unclear prompts, missing guardrails, long drifting chats, and trying to solve big tasks in one step. Once you break the work down, verify the reasoning, and guide the AI through a clear process, the output becomes far more useful and trustworthy.
AI is not here to replace your thinking. It is here to help you think faster and more clearly. You stay in control. You set the direction. You give it the information it needs. And when you do that well, AI becomes a strong partner in your work.
If you want to learn how to use AI more effectively for your role, Brightmoor offers hands on guidance and workshops that teach these skills in more detail.
