Creating assessment questions with iQ, part of eNetAssess – Frequently Asked Questions

iQ, the latest AI-powered enhancement to eNetAssess, transforms the way workplace assessments are created. Designed for assessment professionals, iQ makes generating high-quality, relevant questions effortless, whether you’re tackling complex topics like "safety protocols when working at heights" or building foundational knowledge checks.

Cost-effective and easy to integrate, iQ empowers you to:

  • create high-quality, relevant questions tailored to your assessment needs with ease;
  • customise and refine AI-generated questions to align with your specific learning objectives; and
  • maintain rigorous quality standards with built-in quality control features that reject simplistic or irrelevant prompts.

Below you'll find answers to the most commonly asked questions about iQ.

Posted 26 November 2024

Frequently asked questions about iQ

Q. Can the system accommodate a document reference library, particularly one that includes our question writing guidelines, style guide, and syllabus? The questions generated must align with the intended learning outcomes.


A. Yes. We would go through a fine-tuning process, which requires us to prepare training data; that training data can include, for example, your learning outcomes. Once the fine-tuning exercise is complete, we can reference documents such as the question writing guidelines at the prompt-engineering level, while also allowing IWCF staff to refer to specific learning outcomes at the question-generation level.
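As a rough illustration of what such training data might look like, the sketch below shows one training example in the chat-style JSONL format used by common fine-tuning APIs. The field names and the learning-outcome wording are hypothetical, not iQ's actual schema:

```python
import json

# One hypothetical fine-tuning example pairing a learning outcome with a
# question that follows the question writing guidelines. Illustrative only.
example = {
    "messages": [
        {"role": "system",
         "content": "You write assessment questions that follow the "
                    "question writing guidelines and style guide."},
        {"role": "user",
         "content": "Learning outcome: explain the pre-use checks for a "
                    "full-body harness when working at heights."},
        {"role": "assistant",
         "content": "Which check must be completed on a full-body harness "
                    "before each use? ..."},
    ]
}

# Each training example is serialised as one JSON object per line (JSONL).
line = json.dumps(example)
print(line.startswith('{"messages"'))  # True: one record per JSON line
```

Collecting many such examples, drawn from existing approved questions, is what allows the model to pick up the house style and outcome references.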

Q. Given that we write questions at various levels, can the AI be programmed to create questions based on Bloom's Taxonomy?


A. Yes. As part of the fine-tuning exercise we would provide training data containing examples at the different Bloom levels, so that the AI can generate questions based on Bloom's Taxonomy.

Q. We categorise questions by importance levels—critical, necessary, and foundational. Can AI incorporate this classification into the question-generation process?


A. Similar to Bloom's Taxonomy, we can do the same exercise for importance levels, fine-tuning on training data that shows what the different importance levels mean. This would enable IWCF staff to use importance levels as part of their question-creation requests.
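To give a feel for how these classifications could travel with a question-creation request, here is a minimal sketch; the field names, level values, and outcome reference are hypothetical, not iQ's actual API:

```python
# Hypothetical question-generation request carrying the Bloom's Taxonomy
# level and the importance classification described above.
request = {
    "topic": "safety protocols when working at heights",
    "bloom_level": "apply",        # e.g. remember, understand, apply, ...
    "importance": "critical",      # critical, necessary, or foundational
    "learning_outcome": "LO-4.2",  # hypothetical learning-outcome reference
}

def build_prompt(req: dict) -> str:
    """Fold the classification tags into the model prompt."""
    return (f"Write one multiple-choice question on '{req['topic']}' "
            f"at Bloom level '{req['bloom_level']}', "
            f"importance '{req['importance']}', "
            f"covering learning outcome {req['learning_outcome']}.")

print(build_prompt(request))
```

Because the model has seen examples of each level during fine-tuning, the tags in the prompt steer the style and difficulty of the generated question.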

Q. When creating questions manually, we utilise a specific template that includes the question stem, correct answer, and distractors. Will the AI be able to generate questions using a similar template?


A. Yes. Because iQ is implemented directly within eNetAssess, generated questions follow the same template structure, with the question stem, correct answer, and distractors written into the corresponding fields.

Q. We require a grading analysis for each question, describing the outcome without revealing the question itself. Candidates are given this analysis post-assessment to guide them on areas for improvement. Can the system generate this type of analysis? This also forms part of the question-writing template that we currently use.


A. Yes. As mentioned above, we would create a suitable question based on a template. We could use the feedback field to capture information that does not give away the question but provides suitable feedback for a grading analysis sheet, giving the candidate a list of learning areas they need to improve on. Such a report could also be enhanced with visual charts to show areas of strength and weakness.

Q. The correct answer and each distractor must come with clear explanations outlining why they are correct or incorrect. Can the AI provide this level of detail? This is part of our current process and is included on our question-writing template.


A. Yes, we could include this information as part of the data that is written out. As the iQ module is implemented directly within eNetAssess, a field would need to be added to capture this data if the process is currently managed outside of eNetAssess.

Q. Since our review process involves a minimum of three people reaching a consensus, will the system allow for questions to be saved and edited following this review?


A. Yes. Questions created with iQ behave just like other questions created in the system: they can be saved without validation and then reviewed and edited until they are approved.

Q. Is there a designated area within the system for holding questions before they are finalised?


A. As mentioned above, questions are created in the same manner as standard questions in eNetAssess, with fields populated automatically. Questions can therefore be created but left unvalidated and unapproved, requiring manual approval before they are finalised.

Q. We currently do not input questions in English into ENA until they have been translated into core languages. For dual language support, specifically Arabic, the English questions must match the Arabic translations. How can this requirement be integrated into the workflow?


A. We would introduce a feature within iQ that allows you to create multiple language components at once, so that translated versions of a question are generated together. Because the AI will have been fine-tuned, translations should be accurate enough to need only a review.

Alternatively, we can also provide the functionality to create translated versions of question components that were not created with AI. This would streamline the process where the core-language component is created first and you wish to generate an automated English component from the core-language question.
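The paired-component idea above might be sketched along these lines; the function name, field names, and status value are hypothetical, not the actual iQ API:

```python
# Hypothetical sketch: create one component per language for a single
# question, so the English and Arabic versions stay linked for review.
def make_components(question_id: str, texts: dict) -> list:
    """Return one component per language, sharing the question id."""
    return [
        {"question_id": question_id, "language": lang, "stem": stem,
         "status": "pending_review"}  # each version is reviewed before approval
        for lang, stem in texts.items()
    ]

components = make_components(
    "Q-1042",
    {"en": "Which anchor point rating is required for fall arrest?",
     "ar": "ما هو تصنيف نقطة التثبيت المطلوب لإيقاف السقوط؟"},
)
print(len(components))  # → 2
```

Linking the components through a shared question id is what keeps the English and Arabic versions in step as each one goes through review.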

Q. Will the AI learn from the edits we make for the future creation of content? Does the system track what has been created?


A. Partially. We would not have this as an automated process; however, if at a later date you wished to re-upload training data to the fine-tuning to optimise the responses, we could accommodate this when and where required.