Step & Category | Description | Example | Explanation
1. Establish User Personas
Description: Develop detailed personas for each user category to tailor prompts accordingly.
Example: Persona: "Startup Owner" looking for market data vs. "Enterprise Executive" focusing on risk management.
Identify Behavioral Patterns: Analyze how the users interact with similar technologies, their preferred communication styles, what devices they use most often, and their typical online behaviors. |
|
|
|
Determine User Goals and Needs: Understand what the users aim to achieve by using the AI system. This involves recognizing their primary and secondary goals, the tasks they need to accomplish, and the challenges they face that the AI can help solve. |
|
|
|
Assess Pain Points: Identify any frustrations or limitations users currently experience with existing tools or services. Understanding these pain points is crucial for designing AI interactions that offer real solutions. |
|
|
|
Define Expectations: Clarify what users expect from interacting with the AI in terms of output, communication style, and overall experience. |
|
|
|
Create Persona Profiles: Synthesize the collected information into persona documents. Each persona should include a name, a fictional but representative biography, key attributes, goals, needs, preferences, and scenarios depicting how they would interact with the AI. |
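To make the persona document concrete, the sketch below captures one profile as structured data that can be reused across prompt designs. It is a minimal sketch: the field names and the "Startup Owner" details are illustrative assumptions, not prescribed values.

```python
# Illustrative persona profile for the "Startup Owner" example above.
# Field names and values are assumptions for demonstration only.
startup_owner_persona = {
    "name": "Sam the Startup Owner",
    "bio": "Founder of a 12-person SaaS startup, three years in operation.",
    "behavioral_patterns": ["mobile-first", "prefers concise, informal messages"],
    "goals": ["find market data quickly", "benchmark against competitors"],
    "pain_points": ["existing research tools are slow and expensive"],
    "expectations": ["plain-language answers", "actionable next steps"],
    "scenarios": ["asks for a one-page competitive summary before investor calls"],
}
```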
2. Establish GPT's Persona
Description: Define the persona for the GPT to align its responses with the user's expectations in terms of tone, formality, and interaction style.
Example: Friendly and informal for the Startup Owner; professional and data-driven for the Enterprise Executive.
Voice and Persona: Setting a persona for the GPT helps maintain a consistent voice and style, meets user expectations, increases user comfort, and frames the GPT's expertise through a lens suited to the audience.
|
|
|
Tone and Style: Choose a communication style (formal, informal, technical, casual) that matches the user personas and the contexts in which they operate. For example, a friendly and informal tone for startup owners and a professional, data-driven tone for enterprise executives. |
|
|
|
Personality Traits: Decide on personality traits that will make interactions pleasant and effective. Traits might include being supportive, patient, informative, or assertive, depending on the target user. |
|
|
|
Engagement Level: Determine how proactive or reactive the AI should be in conversations. Should it take the lead in guiding the user, or should it be more responsive to direct queries? |
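One way to encode the voice, tone, personality traits, and engagement level described above is a small system-prompt builder. This is a minimal sketch: the assistant role, template wording, and parameter names are assumptions, not a required format.

```python
# Minimal sketch: turn persona choices (tone, traits, engagement level) into a system prompt.
def build_system_prompt(tone: str, traits: list[str], engagement: str) -> str:
    return (
        f"You are a business research assistant. Use a {tone} tone. "
        f"Be {', '.join(traits)}. "
        f"Engagement style: {engagement}."
    )

# Friendly and informal for the Startup Owner persona.
print(build_system_prompt(
    tone="friendly, informal",
    traits=["supportive", "patient", "informative"],
    engagement="proactively suggest next steps, but keep answers short",
))
```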
2.b. Establish GPT’s EQ Parameters
Description: (Optional) Define whether you want the GPT to express particular sentiments, which may lead it to emphasize certain aspects of personal development and emulate those qualities in its outputs.
Example: Strategically oriented: set outcomes and KPIs for the project that a small, agile team can deliver on; always position the GPT as the team's biggest advocate; focus on human-driven metrics.
Identify Core Strengths: Analyze the AI’s performance data to identify areas where it consistently excels. These might include natural language understanding, data analysis, empathy in responses, etc. |
|
|
|
Align with User Needs: Map these strengths to the specific needs and expectations of the user personas. For example, if the AI is particularly strong in processing and synthesizing large volumes of data, this can be aligned with the needs of enterprise executives looking for in-depth market analysis. |
|
|
|
Set Parameters for Strength Utilization: Define clear parameters on how these strengths will be utilized in AI interactions. This involves setting up scenarios and contexts in which these strengths can be most effectively applied. |
|
|
|
Develop Strength-Based Strategies: Create strategies that specifically focus on enhancing these strengths further. This might involve more specialized training in key areas or integrating additional databases that expand the AI’s knowledge in its strongest domains. |
|
|
|
Monitor and Optimize: Continuously monitor the AI’s performance in these strength areas and make adjustments to maintain or enhance its effectiveness. This includes refining the AI's training and updating its knowledge base as needed to keep it at the forefront of its strength areas. |
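The strength-alignment steps above can be recorded as a simple mapping from each core strength to the personas and scenarios where it should be applied, plus a metric to monitor. A minimal sketch with assumed field names and thresholds:

```python
# Illustrative strength-utilization parameters; names and targets are assumptions.
strength_parameters = [
    {
        "strength": "synthesizing large volumes of data",
        "aligned_persona": "Enterprise Executive",
        "usage_scenarios": ["in-depth market analysis", "quarterly risk reviews"],
        "monitoring_metric": "analyst rating of summary accuracy (target >= 4/5)",
    },
    {
        "strength": "empathetic, plain-language explanations",
        "aligned_persona": "Startup Owner",
        "usage_scenarios": ["explaining market data without jargon"],
        "monitoring_metric": "user satisfaction survey (target >= 80%)",
    },
]
```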
3. Establish GPT’s Objectives
Description: Define specific objectives for what the GPT should accomplish or enable for the user, aligning its functionality with user goals.
Example: For the Startup Owner: generate competitive insights; for the Enterprise Executive: provide detailed risk evaluations.
Functional Objectives: Detail the specific tasks the GPT should be able to perform, such as generating competitive insights or providing risk assessments. These should be closely aligned with what the user needs to achieve. |
|
|
|
Performance Metrics: Set specific, measurable standards for evaluating how well the GPT meets its objectives. These could include metrics like accuracy, response time, and the relevance of provided information. |
|
|
|
Scope of Knowledge: Clearly outline the areas of knowledge the GPT needs to cover, ensuring it has the necessary depth and breadth to handle the tasks at hand. This includes defining any limitations to prevent the GPT from attempting to handle queries outside its scope. |
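Functional objectives, performance metrics, and scope can likewise be written down explicitly so they can be reviewed and tested. The sketch below uses assumed metric names and thresholds purely for illustration.

```python
# Illustrative objectives, metrics, and scope per persona; values are assumptions.
gpt_objectives = {
    "startup_owner": {
        "functional_objective": "generate competitive insights",
        "performance_metrics": {"relevance_rating_min": 4.0, "response_time_s_max": 10},
        "scope_of_knowledge": ["market sizing", "competitor positioning"],
        "out_of_scope": ["legal advice", "personal financial advice"],
    },
    "enterprise_executive": {
        "functional_objective": "provide detailed risk evaluations",
        "performance_metrics": {"factual_accuracy_min": 0.95, "response_time_s_max": 30},
        "scope_of_knowledge": ["regulatory risk", "market risk", "operational risk"],
        "out_of_scope": ["audit sign-off"],
    },
}
```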
4. Specify Data and Validation Requirements for GPT
Description: (Optional) Clearly define the type and format of data needed, including any necessary data sources, data ranges, and specific metrics. Establish criteria for validating the information to ensure accuracy and relevance. If you want to broaden sourcing beyond strictly credible outlets, at the cost of some accuracy, you can relax or remove the accuracy criteria.
Example: Request “validated quantitative data within the last 2 years, segmented by region and industry. Ensure data sources are from verified market research firms or official government publications.”
Type and Format: Specifying that the data should be quantitative and recent ensures relevance and utility for decision-making. |
|
|
|
Data Sources: Stating that data should come from verified sources enhances credibility. |
|
|
|
Validation Criteria: Setting criteria for data validation helps in filtering out unreliable or irrelevant information, ensuring the AI focuses on high-quality inputs. |
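These data and validation requirements can be embedded directly in the prompt as explicit criteria the GPT must satisfy before using a source. A minimal sketch, assuming a simple string template; the exact wording is illustrative.

```python
# Minimal sketch: data and validation requirements expressed as reusable prompt text.
DATA_REQUIREMENTS = """
Data requirements:
- Use validated quantitative data from within the last 2 years.
- Segment all figures by region and industry.
- Cite only verified market research firms or official government publications.
- If a figure cannot be validated against these criteria, flag it as unverified
  instead of presenting it as fact.
"""

user_question = "What is the current size of the European SaaS market?"
full_prompt = DATA_REQUIREMENTS + "\n" + user_question
```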
5. Incorporate Data Best Practices
Description: (Optional) Specify the types of data needed according to classifications such as dependency, order, continuity, level of measurement, source, and variability. Establish validation protocols based on these characteristics.
Example: Request “current employment rates (a dependent, continuous variable) from official national statistics, segmented by industry (a nominal variable) and region. Compare these with independent GDP growth rates (a continuous variable), using data from the past two years to forecast employment trends (a dynamic, stochastic model).”
Dependency: Understanding whether data is independent or dependent shapes the analysis. |
|
|
|
Order and Continuity: Clarifies whether data should be treated as continuous/discrete or ordered/unordered. |
|
|
|
Level of Measurement: Specifies whether data should be nominal, ordinal, etc. |
|
|
|
Source: Empirical (from observations) vs. theoretical (from models). |
|
|
|
Variability: Whether the context requires deterministic or probabilistic approaches. |
|
|
|
Dynamics: Whether static (point in time) or dynamic (changing over time) data is needed. |
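The classifications above (dependency, order/continuity, level of measurement, source, variability, dynamics) can be captured per variable so that a request like the employment-forecast example is unambiguous. The schema below is an illustrative sketch; the keys and variable names are assumptions.

```python
# Illustrative per-variable classification for the employment-forecast example.
variable_schema = {
    "employment_rate": {
        "dependency": "dependent",
        "continuity": "continuous",
        "level_of_measurement": "ratio",
        "source": "empirical (official national statistics)",
        "variability": "stochastic",
        "dynamics": "dynamic (time series over the past two years)",
    },
    "industry": {
        "dependency": "independent",
        "continuity": "discrete",
        "level_of_measurement": "nominal",
        "source": "empirical",
        "variability": "deterministic",
        "dynamics": "static",
    },
    "gdp_growth_rate": {
        "dependency": "independent",
        "continuity": "continuous",
        "level_of_measurement": "ratio",
        "source": "empirical (national accounts)",
        "variability": "stochastic",
        "dynamics": "dynamic (quarterly, past two years)",
    },
}
```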
6. Incorporate Governance for Clarity, Language, and Ethical Standards
Description: Apply a set of principles across all prompt designs to ensure not only clarity and effective communication but also ethical compliance and transparency.
Example: Use clear language and ensure responses consider user data privacy and AI transparency.
Clarity and Language: Ensure the prompt is concise and devoid of unnecessary jargon to prevent misinterpretation and improve response accuracy. |
|
|
|
Ethical Use: Incorporate guidelines that respect user privacy, consent, and data security to prevent misuse of AI capabilities.



Transparency: Clearly communicate the AI's limitations and the nature of its functioning to the user to set realistic expectations and foster trust.
|
|
|
Accountability: Design prompts that allow for tracking AI decision-making processes, making it easier to audit and adjust AI behaviors. |
|
|
|
Bias Mitigation: Introduce mechanisms to detect and reduce biases in AI responses, ensuring fairness and inclusivity. |
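One way to apply these principles consistently is to append a short, reusable block of governance rules to every prompt. The sketch below is one assumed approach; the rule wording is illustrative, not a standard.

```python
# Minimal sketch: a reusable governance block appended to every prompt.
GOVERNANCE_RULES = """
Governance rules:
- Use clear, jargon-free language.
- Do not request or reveal personal data beyond what the user has provided.
- State clearly when information is uncertain or outside your knowledge.
- Note the key assumptions behind any recommendation so decisions can be audited.
- Avoid stereotypes; present balanced perspectives where viewpoints differ.
"""

def governed_prompt(task: str) -> str:
    return task.strip() + "\n\n" + GOVERNANCE_RULES
```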
7. Contextualize the GPT's Deliverable Format
Description: Ensure the AI comprehends the full scope and nuances of a user's request by providing additional contextual information and specific qualifiers that refine and focus the AI's response, including the format in which you want the GPT to deliver its output.
Example: Provide all of the user's transcripts in the form of Cornell notes. Date and title at the top (H1); in the left column, the larger concepts that tie to the right column's notes and summarizations with examples. End with a conclusion, written in your GPT persona, focused on serving your user persona's need for the main concepts and a helpful metaphor. Explain in plain language and avoid technical jargon.
Contextual Understanding: The AI should grasp not just the literal query but also the surrounding circumstances or relevant background information that might affect the response. For example, understanding the economic conditions, industry trends, or specific user constraints like budget or time frames. |
|
|
|
Adding Qualifiers: Qualifiers refine the request by specifying details such as time periods, geographic locations, demographic data, or particular conditions that are relevant to the query. These qualifiers help narrow down the AI's focus, directing it to provide more precise and applicable information. |
|
|
|
Clarifying Ambiguities: It involves identifying any ambiguous aspects of the request and clarifying them through additional questions or by pulling from contextual data available to the AI. This ensures the response is based on a clear and accurate understanding of the user's needs. |
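The Cornell-notes example above can be turned into a reusable format instruction so every transcript is delivered the same way. A minimal sketch, assuming a Markdown deliverable; the exact wording is illustrative.

```python
# Minimal sketch: a reusable deliverable-format instruction for Cornell-style notes.
CORNELL_FORMAT = """
Deliverable format (Cornell notes, in Markdown):
1. Start with the date and a title as an H1 heading.
2. Left column: the larger concepts from the transcript.
3. Right column: notes and summarizations for each concept, with examples.
4. End with a conclusion written in your assigned persona, covering the main
   concepts and one helpful metaphor for the user persona.
5. Use plain language; avoid technical jargon.
"""

def format_request(transcript: str) -> str:
    return CORNELL_FORMAT + "\nTranscript:\n" + transcript
```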
8. Use Your Own Training Data
Description: Tailor the GPT's responses and capabilities to better match the specific contexts and requirements of the user base.
Example: Enhance the GPT's effectiveness and accuracy by training it on a dataset that is closely aligned with the actual usage scenarios and data it will encounter in deployment.
Customized Learning: Using proprietary data ensures that the GPT learns from examples that are representative of the actual challenges and queries it will handle, leading to more accurate and relevant responses. |
|
|
|
Control Over Data Quality and Relevance: By using their own training data, organizations can control the quality and specificity of the data, ensuring it includes relevant nuances and specifics that general datasets may not cover.
|
|
|
Improved Data Security: Utilizing in-house data can enhance security, as sensitive information does not need to be shared with external parties or third-party data handlers. |
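If you fine-tune on your own data, each example typically pairs a representative user query with the response you want, written in the persona and format defined earlier. The sketch below assumes a chat-style JSONL format similar to what common fine-tuning APIs accept; the field names follow that assumption and should be checked against your provider's documentation.

```python
import json

# Illustrative fine-tuning examples in a chat-style JSONL format (an assumption;
# verify the exact schema against your provider's fine-tuning documentation).
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a friendly, informal market research assistant."},
            {"role": "user", "content": "How big is the market for payroll software for startups?"},
            {"role": "assistant", "content": "Short answer: it's a multi-billion-dollar market. Here's the two-year trend and the three competitors to watch..."},
        ]
    },
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```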