Artificial intelligence (AI) has moved far beyond functioning as a simple search tool like Google and increasingly supports human work. However, AI does not inherently know everything; like humans, it also needs to “learn.”
This was a key topic at AI Training Series 3, an academic discussion titled ‘LLM for Social Science Researchers: From Concept to Practice’, which featured Dr. Rimawan Pradiptyo as the keynote speaker on Tuesday, 14 April. Rimawan explained that humans possess capabilities that surpass AI, particularly metacognition and the ability to reflect.
“AI operates at the level of algorithms and mathematical simulations, combining patterns from various data sources to generate answers. However, AI lacks awareness and the ability to think about thinking, even though humans themselves are limited by bounded rationality,” he explained.
Another key difference lies in originality, which stems from humans rather than AI. AI’s strength is primarily retrospective; it can answer “what” and “how” questions, but predicting the future and making complex decisions remain human capabilities.
“Humans also have the ability to quickly synthesize information, identify contradictions, and challenge AI-generated outputs. A common issue is that people ask AI once and immediately trust the result,” he added.
He emphasized that using AI effectively requires iterative exploration by testing different perspectives. Users often pose unrealistic questions and assume AI has human-like consciousness.
“Asking AI to think like a specific figure with a high level of awareness can lead to inaccurate outputs because AI has limitations: it can hallucinate, provide incorrect answers, and it lacks true understanding,” he noted.
During the training, Rimawan explained that interacting with AI should be done strategically through structured prompting, clear constraints, and repeated validation. He introduced the two-stage prompting approach, in which the initial stage involves providing clear, simple instructions so the AI can better understand the provided context.
“This technique allows AI to ‘learn’ from the context and minimizes errors in prompting. It also encourages users to engage in metacognition—evaluating whether the output is sufficient and identifying what needs improvement,” he said.
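The two-stage pattern described above can be sketched in code. The sketch below is illustrative only: `call_model` is a hypothetical stand-in for any real LLM API, implemented here as a stub so the structure of the exchange is visible without an API key.

```python
# Sketch of two-stage prompting (illustrative; call_model is a stub
# standing in for a real LLM API call).
def call_model(messages):
    # A real implementation would send `messages` to an LLM endpoint
    # and return its reply; the stub just reports the conversation size.
    return f"[model response to {len(messages)} message(s)]"

def two_stage_prompt(context, question):
    messages = []

    # Stage 1: give the model clear, simple instructions and context
    # before asking anything, so it can "learn" the setting first.
    messages.append({"role": "user",
                     "content": f"Read this context carefully:\n{context}"})
    messages.append({"role": "assistant", "content": call_model(messages)})

    # Stage 2: only now pose the actual research question.
    messages.append({"role": "user", "content": question})
    return call_model(messages)

answer = two_stage_prompt("Survey data on household spending, 2020-2023.",
                          "What spending patterns stand out?")
```

Separating context from question this way also gives the user a natural checkpoint for metacognition: after stage 1, one can judge whether the model has understood the context before investing in the real question.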
Additionally, Rimawan introduced pseudo backpropagation as an interaction strategy. Since users cannot directly modify AI’s internal parameters, the “learning” process can be simulated through a series of iterative and directed prompts.
“This is done by asking questions from multiple perspectives, testing answer consistency gradually, and providing feedback through follow-up prompts. With this approach, users can guide AI to produce more accurate and relevant outputs while maintaining human control over the analysis,” he added.
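The iterative loop Rimawan describes can be sketched as follows. All names here are illustrative assumptions: `query` stubs an LLM call, and `checks` stands in for the user's own consistency tests from multiple perspectives. Since the model's internal parameters cannot be changed, detected problems are fed back as directed follow-up prompts, playing a role loosely analogous to a gradient signal.

```python
# Sketch of "pseudo backpropagation" via iterative prompting
# (illustrative; `query` is a stub for a real LLM call).
def query(prompt, history):
    # A real implementation would call an LLM with the prompt plus the
    # prior exchanges; the stub numbers its answers by history length.
    return f"answer#{len(history)}"

def refine(question, checks, max_rounds=3):
    history = []
    answer = query(question, history)
    history.append((question, answer))

    for _ in range(max_rounds):
        # Test the answer's consistency from multiple perspectives.
        failed = [check for check in checks if not check(answer)]
        if not failed:
            break
        # Feed the detected problems back as a directed follow-up
        # prompt, keeping the human in control of the analysis.
        feedback = f"Revise: the answer failed {len(failed)} check(s)."
        answer = query(feedback, history)
        history.append((feedback, answer))
    return answer, history
```

The point of the loop is not automation but discipline: each round forces the user to articulate what was wrong with the previous output before asking again.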
In closing, Rimawan encouraged participants to apply game theory when interacting with AI. Users should anticipate how AI will respond, predict its reaction function, and adjust their questioning strategies to optimize results.
“By understanding how AI works and being aware of our own capabilities, interactions become more effective and purposeful. The principle is simple yet crucial: know your tools and know yourself,” he concluded.
Reported by: Shofi Hawa Anjani
Edited by: Kurnia Ekaptiningrum
