[Microsoft] AI-102 - Azure AI Engineer Associate Exam Dumps & Study Guide
Designing and Implementing a Microsoft Azure AI Solution (AI-102) is the premier exam for AI engineers who want to demonstrate their expertise in building and managing AI solutions on Microsoft Azure. As organizations increasingly adopt AI and machine learning to drive innovation and efficiency, the ability to design and implement robust, scalable, and secure AI solutions has become a highly sought-after skill. The AI-102 exam validates your core knowledge of Azure AI services, including computer vision, natural language processing (NLP), and generative AI, and is an essential milestone for any professional looking to lead in the age of modern AI development.
Overview of the Exam
The AI-102 exam is a rigorous assessment that covers the development and implementation of AI solutions in Azure. It is a 120-minute exam consisting of approximately 40-60 questions. The exam is designed to test your knowledge of Azure AI technologies and your ability to apply them to real-world development scenarios. From planning and implementing AI infrastructure to managing cognitive services and deploying AI models, the AI-102 ensures that you have the skills necessary to build and maintain modern cloud-managed AI applications. Achieving the AI-102 certification proves that you are a highly skilled professional who can handle the technical demands of Azure AI development.
Target Audience
The AI-102 is intended for AI engineers and developers who have a solid understanding of Azure services and modern software development practices. It is ideal for individuals in roles such as:
1. AI Engineers
2. Software Developers
3. Data Scientists
4. Solutions Architects
To qualify for the Microsoft Certified: Azure AI Engineer Associate certification, candidates must pass the AI-102 exam.
Key Topics Covered
The AI-102 exam is organized into six main domains:
1. Plan and Manage an Azure AI Solution (15-20%): Designing and implementing effective AI solutions and choosing the right Azure AI services.
2. Implement Content Moderation Solutions (10-15%): Implementing security and moderation features for AI applications.
3. Implement Computer Vision Solutions (15-20%): Implementing solutions for image and video analysis using Azure AI services.
4. Implement Natural Language Processing Solutions (30-35%): Implementing solutions for language analysis, speech recognition, and translation.
5. Implement Knowledge Mining and Document Intelligence Solutions (10-15%): Implementing solutions for data extraction and document analysis.
6. Implement Generative AI Solutions (10-15%): Implementing solutions using Azure OpenAI Service.
Benefits of Getting Certified
Earning the AI-102 certification provides several significant benefits. First, it offers industry recognition of your specialized expertise in Microsoft's AI technologies; because Microsoft is a leader in the AI industry, these skills are in high demand across the globe. Second, it can lead to increased career opportunities and higher salary potential in a variety of roles. Third, it demonstrates your commitment to professional excellence and your dedication to staying current with the latest AI development practices. By holding this certification, you join a global community of Microsoft professionals and gain access to exclusive resources and continuing education opportunities.
Why Choose NotJustExam.com for Your AI-102 Prep?
The AI-102 exam is challenging and requires a deep understanding of Azure's complex AI features. NotJustExam.com is the best resource to help you master this material. Our platform offers an extensive bank of practice questions that are designed to mirror the actual exam’s format and difficulty.
What makes NotJustExam.com stand out is our focus on interactive logic and the accuracy of our explanations. We don’t just provide a list of questions; we provide a high-quality learning experience. Every question in our bank includes an in-depth, accurate explanation that helps you understand the technical reasoning behind the correct AI solution. This ensures that you are truly learning the material and building the confidence needed to succeed on the exam. Our content is regularly updated by subject matter experts to reflect the latest Azure AI features and development trends. With NotJustExam.com, you can approach your AI-102 exam with the assurance that comes from thorough, high-quality preparation. Start your journey toward becoming a Certified Azure AI Engineer today with us!
Free [Microsoft] AI-102 - Azure AI Engineer Associate Practice Questions Preview
-
Question 1
DRAG DROP -
You have 100 chatbots that each has its own Language Understanding model.
Frequently, you must add the same phrases to each model.
You need to programmatically update the Language Understanding models to include the new phrases.
How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all.
You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:

Correct Answer:
See interactive view.
Explanation:
Based on the question, answer, and discussion content, the recommended answer is:
Box 1: AddPhraseListAsync
Box 2: PhraselistCreateObject
Reasoning:
The objective is to programmatically update Language Understanding models to include new phrases. The AddPhraseListAsync method is designed specifically for adding phrases to a LUIS app's phrase list feature programmatically. The PhraselistCreateObject is the correct object to use when creating a new phrase list, as indicated by the C# code snippet utilizing the new keyword to instantiate the object. Using this method ensures the new phrases are integrated appropriately into the LUIS model.
The Microsoft documentation provides an example of how to use AddPhraseListAsync along with PhraselistCreateObject:
```csharp
var phraselistId = await client.Features.AddPhraseListAsync(appId, versionId, new PhraselistCreateObject
{
    EnabledForAllModels = false,
    IsExchangeable = true,
    Name = "QuantityPhraselist",
    Phrases = "few,more,extra"
});
```
Reasons for not selecting other options:
- The other options are either not relevant to the task of adding phrases programmatically or are not valid objects/methods in the LUIS authoring SDK for this purpose. Without the full list of options shown in the actual exam's drag-and-drop interface, they cannot be addressed individually, but the stated intent makes AddPhraseListAsync and PhraselistCreateObject the only logical choices.
Citations:
- Example: Add phraselist feature, https://docs.microsoft.com/en-us/azure/cognitive-services/luis/client-libraries-rest-api
-
Question 2
DRAG DROP -
You plan to use a Language Understanding application named app1 that is deployed to a container.
App1 was developed by using a Language Understanding authoring resource named lu1.
App1 has the versions shown in the following table.

You need to create a container that uses the latest deployable version of app1.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

Correct Answer:
See interactive view.
Explanation:
The recommended sequence of actions is as follows:
- Select v1.1 of app1. This is because v1.1 is the latest version of the application that has been trained, making it the most up-to-date and suitable version for deployment. The question specifies using the "latest deployable version," which aligns with using the latest *trained* version.
- Export the model using the Export for containers (GZIP) option. This step is necessary to package the selected model (v1.1) in a format that is compatible with the Language Understanding container. The "Export for containers (GZIP)" option specifically prepares the model for containerization.
- Run a container and mount the model file. This is the final step where the exported model is utilized within a container environment. Mounting the model file makes the trained application accessible to the container, allowing it to process language understanding requests.
Reasoning:
The key to answering this question correctly is to identify v1.1 as the latest trained version and to follow the standard procedure for deploying a LUIS model to a container.
- Selecting v1.1 is crucial because the problem statement specifies using the "latest deployable version." Since v1.1 is the latest trained version available, this is the correct selection.
- Exporting the model for containers using GZIP is the appropriate way to package the model so that it can be used inside the container environment.
- Running the container and mounting the model file makes the LUIS application functional within the container, allowing it to process requests.
Why other answers are not correct:
- Selecting and exporting v1.0 would be incorrect because it is an older version, and the question asks for the latest.
- Selecting the LUIS v3 API, or performing other actions not directly related to preparing and deploying the latest trained model to a container, falls outside the scope of the question.
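The three actions can be sketched as shell commands. This is a minimal sketch, not part of the exam's answer format: the directory name, port, and resource limits are illustrative, `LU1_ENDPOINT` and `LU1_KEY` are placeholder variables for the lu1 endpoint and key, the LUIS image path should be verified against the current container documentation, and the `docker run` command is echoed as a dry run because it needs a Docker daemon and real credentials.

```shell
# Dry-run sketch: serve the exported v1.1 package from a LUIS container.
MODEL_DIR="$PWD/luis_models"   # holds the "Export for containers (GZIP)" package of v1.1
mkdir -p "$MODEL_DIR"

# Echoed, not executed: mount the model directory and pass billing details at runtime.
echo docker run --rm -p 5000:5000 --memory 4g --cpus 2 \
  --mount type=bind,src="$MODEL_DIR",target=/input \
  mcr.microsoft.com/azure-cognitive-services/luis:latest \
  Eula=accept Billing="$LU1_ENDPOINT" ApiKey="$LU1_KEY"
```

Passing `Billing` and `ApiKey` as container arguments (rather than baking them into the image) is what connects the disconnected container back to the Azure resource for metering.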
Citation:
- LUIS Container How-to: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-container-howto
-
Question 3
You need to build a chatbot that meets the following requirements:
✑ Supports chit-chat, knowledge base, and multilingual models
✑ Performs sentiment analysis on user messages
✑ Selects the best language model automatically
What should you integrate into the chatbot?
- A. QnA Maker, Language Understanding, and Dispatch
- B. Translator, Speech, and Dispatch
- C. Language Understanding, Text Analytics, and QnA Maker
- D. Text Analytics, Translator, and Dispatch
Correct Answer:
C
Explanation:
The recommended answer is C (Language Understanding, Text Analytics, and QnA Maker). The detailed reasoning is as follows:
Reasoning for choosing Option C:
- Comprehensive Feature Coverage: Option C encompasses the necessary services to address all requirements. Specifically, Language Understanding (LUIS) handles natural language understanding, Text Analytics performs sentiment analysis, and QnA Maker deals with the knowledge base and chit-chat aspects.
- Sentiment Analysis: The question explicitly requires sentiment analysis. The Text Analytics service is specifically designed to provide sentiment analysis, which aligns directly with this requirement.
- Knowledge Base and Chit-chat: QnA Maker is designed to create a conversational layer over your data. It can handle everything from basic FAQs to more complex conversations, thus satisfying the knowledge base and chit-chat requirement.
Reasoning for excluding other options:
- Option A (QnA Maker, Language Understanding, and Dispatch): This option might seem appealing because it includes Dispatch, which routes user input to the appropriate service. However, none of its components performs sentiment analysis, so the solution is incomplete; the lack of sentiment analysis capability is its primary drawback. Additionally, modern LUIS features can often handle intent routing, reducing the need for a separate Dispatch service in simple use cases.
- Option B (Translator, Speech, and Dispatch): This option does not address the knowledge base or sentiment analysis requirements. Translator is for language translation, Speech is for converting speech to text or vice versa, and Dispatch is for routing. None of these addresses the core requirements of a chatbot with Q&A and sentiment analysis capabilities.
- Option D (Text Analytics, Translator, and Dispatch): While this includes sentiment analysis via Text Analytics and Dispatch for routing, it lacks the capability to manage the chatbot’s knowledge base and the natural language understanding aspect, both of which are critical for a robust chatbot. It replaces the crucial Language Understanding and QnA Maker components with translation services, which are not central to the stated requirements.
Based on the requirements, a chatbot needs to understand user input, determine sentiment, and provide relevant answers from a knowledge base. Option C provides the most complete set of services to fulfill these requirements efficiently.
Citations:
- Azure Text Analytics Documentation: https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/sentiment-opinion-mining/overview
- Azure Language Understanding (LUIS) Documentation: https://learn.microsoft.com/en-us/azure/cognitive-services/luis/what-is-luis
- Azure QnA Maker Documentation: https://learn.microsoft.com/en-us/azure/cognitive-services/qna-maker/overview/
- Azure Dispatch: https://learn.microsoft.com/en-us/azure/cognitive-services/luis/dispatch-create
-
Question 4
Your company wants to reduce how long it takes for employees to log receipts in expense reports. All the receipts are in English.
You need to extract top-level information from the receipts, such as the vendor and the transaction total. The solution must minimize development effort.
Which Azure service should you use?
- A. Custom Vision
- B. Personalizer
- C. Form Recognizer
- D. Computer Vision
Correct Answer:
C
Explanation:
The recommended answer is C. Form Recognizer (now known as Azure AI Document Intelligence). This is the most suitable Azure service for extracting information from receipts with minimal development effort.
Reasoning:
Azure AI Document Intelligence (formerly Form Recognizer) is specifically designed for Optical Character Recognition (OCR) and extracting structured data from documents like receipts and invoices. It comes with pre-built models that are already trained to recognize common fields found on receipts, such as vendor names, transaction totals, and dates. This significantly reduces the amount of custom development needed. By using a pre-built model, your company can quickly and efficiently extract the required information to reduce the time employees spend on expense reports.
Reasons for not choosing the other answers:
- A. Custom Vision: Custom Vision is used for image classification and object detection. While it can be trained to identify receipts, it does not automatically extract specific data fields such as vendor or total. Training a model to identify and extract that information would require significant effort, including labelling training images, so it lacks the out-of-the-box functionality we need and is far less efficient than Form Recognizer.
- B. Personalizer: Personalizer is a service for creating personalized user experiences. It's not relevant to extracting data from receipts.
- D. Computer Vision: Computer Vision offers broader image analysis capabilities, including OCR. However, unlike Form Recognizer, it does not provide pre-built models specifically for receipts. Using Computer Vision would require custom logic to identify and extract the required fields, making it less efficient and less cost-effective than Form Recognizer; you would need to write significantly more code to extract the same information.
-
Question 5
HOTSPOT -
You need to create a new resource that will be used to perform sentiment analysis and optical character recognition (OCR). The solution must meet the following requirements:
✑ Use a single key and endpoint to access multiple services.
✑ Consolidate billing for future services that you might use.
✑ Support the use of Computer Vision in the future.
How should you complete the HTTP request to create the new resource? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
See interactive view.
Explanation:
The correct HTTP request to create the new resource uses PUT and specifies the kind as CognitiveServices (the multi-service account) under the Microsoft.CognitiveServices resource provider.
Reasoning:
1. PUT: In Azure, PUT is used to create or update a resource. Given that the question specifies the creation of a new resource, PUT is the appropriate HTTP method. PUT is idempotent, meaning that making the same request multiple times will produce the same result, which is ideal for creating resources.
2. CognitiveServices: The question requires sentiment analysis and OCR capabilities and calls for consolidated billing and support for future services, including Computer Vision. All of these capabilities are offered within Azure Cognitive Services. Using the CognitiveServices kind creates a multi-service account, which allows you to access multiple services through a single key and endpoint and consolidates billing.
Reasons for not choosing other options:
- PATCH is typically used for updating an existing resource, not creating a new one.
- Using a different resource provider would not meet the requirements for sentiment analysis, OCR, and support for future Computer Vision services under a single key and consolidated billing, as efficiently as Cognitive Services does.
- While POST can create resources in some REST APIs, Azure Resource Manager uses PUT to create or update a resource at a known URI, and PUT has the added benefit of being idempotent.
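A rough shape of such a request is sketched below. The subscription ID, resource group, and account name are placeholders (not values from the question), the real call also needs an `Authorization: Bearer` header with an Azure Resource Manager token, and the request is only constructed and printed here, never sent.

```shell
# Sketch: PUT an ARM request that creates a multi-service Cognitive Services account.
# {subscriptionId}, rg1, and csaccount1 are placeholders.
URL="https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/rg1/providers/Microsoft.CognitiveServices/accounts/csaccount1?api-version=2017-04-18"
BODY='{"kind": "CognitiveServices", "sku": {"name": "S0"}, "location": "westus", "properties": {}}'

echo "PUT $URL"
echo "$BODY"
```

The `"kind": "CognitiveServices"` value is what makes this a multi-service resource, satisfying the single-key, consolidated-billing, and future Computer Vision requirements at once.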
-
Question 6
You are developing a new sales system that will process the video and text from a public-facing website.
You plan to monitor the sales system to ensure that it provides equitable results regardless of the user's location or background.
Which two responsible AI principles provide guidance to meet the monitoring requirements? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
- A. transparency
- B. fairness
- C. inclusiveness
- D. reliability and safety
- E. privacy and security
Correct Answer:
BC
Explanation:
The recommended answer is B (Fairness) and C (Inclusiveness).
Reasoning: The question emphasizes monitoring the sales system to ensure equitable results regardless of the user's location or background.
- Fairness (B) directly addresses the need to avoid discrimination and ensure that the system provides equitable outcomes for all users. Monitoring for fairness involves identifying and mitigating potential biases in the system's outputs.
- Inclusiveness (C) ensures that the AI system considers and works effectively for all users, regardless of their backgrounds or characteristics. Monitoring for inclusiveness involves assessing the system's performance across different demographic groups to identify and address any disparities.
These two principles directly align with the monitoring requirements outlined in the problem statement.
Reasons for not choosing other options:
- Transparency (A): While transparency is important in responsible AI, the primary focus of the question is on ensuring equitable results, not on understanding the inner workings of the AI system. Transparency might be a supporting principle, but fairness and inclusiveness are the more direct and relevant choices.
- Reliability and Safety (D): Reliability and safety are crucial for any system, but the scenario specifies monitoring for *equitable results*. While a reliable system is important, it doesn't directly address the core requirement of fairness and inclusiveness.
- Privacy and Security (E): Privacy and security are also important, but not directly related to the requirement of ensuring equitable outcomes regardless of user background. The question focuses on the *results* produced by the AI system, not the protection of user data.
Citations:
- Microsoft Responsible AI Principles, https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3atab1
-
Question 7
DRAG DROP -
You plan to use containerized versions of the Anomaly Detector API on local devices for testing and in on-premises datacenters.
You need to ensure that the containerized deployments meet the following requirements:
✑ Prevent billing and API information from being stored in the command-line histories of the devices that run the container.
✑ Control access to the container images by using Azure role-based access control (Azure RBAC).
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Select and Place:

Correct Answer:
See interactive view.
Explanation:
The recommended answer, based on the requirements of preventing sensitive information from being stored in command-line histories and controlling access via Azure RBAC, is the following sequence: 1. Create a custom Dockerfile, 2. Pull the Anomaly Detector Container Image, 3. Build the image, and 4. Push the image to an Azure Container Registry (ACR).
Reasoning:
1. Create a custom Dockerfile: This allows you to define the specific configurations, environment variables, and steps required to run the Anomaly Detector container. This is the first step in customizing the deployment to meet security and access control requirements.
2. Pull the Anomaly Detector Container Image: By including the image pull instruction (e.g., `FROM mcr.microsoft.com/azure-cognitive-services/anomaly-detector:latest`) inside the Dockerfile, you avoid exposing the API keys or billing information directly in the command-line history.
3. Build the image: Building the docker image from the Dockerfile consolidates the pull instruction and any additional customization into a single deployable unit.
4. Push the image to an Azure Container Registry (ACR): Pushing the built image to ACR enables you to leverage Azure RBAC to control access to the container image. This satisfies the requirement of managing access to the container images securely.
Reasons for not including other options and their problems:
- Distribute the docker run script: Distributing a `docker run` script is problematic because such scripts typically include environment variables or command-line arguments that expose the API key and billing endpoint. Storing these credentials directly in a script defeats the purpose of keeping them out of the command-line histories of the devices that run the container.
By embedding the image pull instruction within the Dockerfile and leveraging ACR with Azure RBAC, you ensure that sensitive information is not stored in command-line histories and that access to the container images is securely controlled.
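The four steps can be sketched as below. This is a minimal sketch under stated assumptions: `myregistry.azurecr.io` is a hypothetical registry name, the image path is taken from the explanation above and should be verified against current MCR paths, and the build and push commands are echoed as a dry run since they require a Docker daemon and an ACR login.

```shell
# Step 1: create a custom Dockerfile. Step 2: the image pull is expressed as a
# FROM instruction, so it happens at build time and never enters shell history.
cat > Dockerfile <<'EOF'
FROM mcr.microsoft.com/azure-cognitive-services/anomaly-detector:latest
EOF

# Steps 3 and 4: build the image, then push it to ACR, where Azure RBAC governs
# who can pull it. Echoed as a dry run; "myregistry" is a placeholder.
echo docker build -t myregistry.azurecr.io/anomaly-detector:1.0 .
echo docker push myregistry.azurecr.io/anomaly-detector:1.0
```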
Citations:
- Azure Container Registry Documentation - Microsoft, https://docs.microsoft.com/en-us/azure/container-registry/container-registry-intro
-
Question 8
HOTSPOT -
You plan to deploy a containerized version of an Azure Cognitive Services service that will be used for text analysis.
You configure https://contoso.cognitiveservices.azure.com as the endpoint URI for the service, and you pull the latest version of the Text Analytics Sentiment Analysis container.
You need to run the container on an Azure virtual machine by using Docker.
How should you complete the command? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
See interactive view.
Explanation:
Based on the question and discussion, the correct way to complete the command for running the Sentiment Analysis container on an Azure virtual machine using Docker involves specifying the correct container image and the endpoint URI.
The first box should contain mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment because this is the official Microsoft Container Registry (MCR) path for the Text Analytics Sentiment Analysis container.
The second box should contain https://contoso.cognitiveservices.azure.com because this is the endpoint URI configured for the Azure Cognitive Services resource as specified in the question. This URI is essential for authenticating and routing requests to the correct Cognitive Services instance.
Reasoning:
The container image mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment is the correct image to use for performing sentiment analysis as provided by Microsoft in their container registry. Using the wrong image would mean the container couldn't perform the intended text analysis. The endpoint https://contoso.cognitiveservices.azure.com is the specific endpoint given in the problem, and it has to be specified so that the container knows the billing and authentication information.
Note that appending "/sentiment" to the endpoint is unnecessary here: the base endpoint identifies the Cognitive Services resource for billing and authentication, and the containerized application handles the sentiment analysis routing internally, so the base endpoint is sufficient for the container to function.
Why other options are not correct:
Choosing an incorrect container image wouldn't allow the Docker container to run the Sentiment Analysis service correctly, and therefore the application would not function as intended. Using the wrong endpoint URI would cause authentication failures and prevent the container from properly accessing the Cognitive Services account. Omitting the URI or providing an incorrect one would result in the container failing to connect.
Suggested Answer:
Box 1: mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment
Box 2: https://contoso.cognitiveservices.azure.com
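Putting the two selections together, the completed command might look like the following sketch; `TA_KEY` is a placeholder environment variable for the resource key, the memory and CPU values are illustrative, and the command is echoed as a dry run rather than executed.

```shell
# Dry-run sketch of the completed docker run command from the question.
IMAGE="mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment"
ENDPOINT="https://contoso.cognitiveservices.azure.com"

echo docker run --rm -p 5000:5000 --memory 8g --cpus 1 \
  "$IMAGE" \
  Eula=accept \
  Billing="$ENDPOINT" \
  ApiKey="$TA_KEY"
```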
-
Question 9
You have the following C# method for creating Azure Cognitive Services resources programmatically.

You need to call the method to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically.
Which code should you use?
- A. create_resource(client, "res1", "ComputerVision", "F0", "westus")
- B. create_resource(client, "res1", "CustomVision.Prediction", "F0", "westus")
- C. create_resource(client, "res1", "ComputerVision", "S0", "westus")
- D. create_resource(client, "res1", "CustomVision.Prediction", "S0", "westus")
Correct Answer:
A
Explanation:
The recommended answer is A. create_resource(client, "res1", "ComputerVision", "F0", "westus").
Reasoning:
The question specifies the need to create a free Azure resource in the West US region specifically for generating image captions. The method's parameters are (client, resource name, service type, pricing tier, region). Computer Vision is the correct service for generating image captions automatically, and "F0" represents the free tier. Thus the combination of Computer Vision and F0 is most suitable.
Reasons for not choosing the other answers:
- B. create_resource(client, "res1", "CustomVision.Prediction", "F0", "westus"): Custom Vision is primarily designed for training custom image classifiers, not for generating image captions. While it does offer prediction capabilities, it isn't the correct service for the described task.
- C. create_resource(client, "res1", "ComputerVision", "S0", "westus"): While Computer Vision is the correct service, "S0" represents the standard tier, which is not free. The question explicitly requires a free resource.
- D. create_resource(client, "res1", "CustomVision.Prediction", "S0", "westus"): This option combines the incorrect service (Custom Vision) with a paid tier ("S0").
Citations:
- Azure Cognitive Services Pricing: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/
- Computer Vision Overview: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview
- Custom Vision Overview: https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/overview
-
Question 10
You successfully run the following HTTP request.
POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18
Body: {"keyName": "Key2"}
What is the result of the request?
- A. A key for Azure Cognitive Services was generated in Azure Key Vault.
- B. A new query key was generated.
- C. The primary subscription key and the secondary subscription key were rotated.
- D. The secondary subscription key was reset.
Correct Answer:
D
Explanation:
The most appropriate answer is D. The secondary subscription key was reset.
Reasoning: The provided HTTP request targets the /regenerateKey endpoint of an Azure Cognitive Services account with "keyName": "Key2" in the request body. According to Microsoft's documentation on the regenerateKey API, specifying "Key2" will regenerate the secondary key for the Cognitive Services account. This means Key2 (the secondary subscription key) is reset and a new key is generated for it.
Why other options are incorrect:
- A. A key for Azure Cognitive Services was generated in Azure Key Vault: This is incorrect because the /regenerateKey API directly regenerates the keys associated with the Cognitive Services account itself, not in Azure Key Vault. While Cognitive Services *can* integrate with Key Vault, this specific operation does not involve it.
- B. A new query key was generated: This is incorrect because "query keys" are typically associated with Azure Cognitive Search. This regenerateKey API is used for Cognitive Services account keys for accessing services like Vision, Speech, Language, etc., not Search services. The resource provider mentioned (Microsoft.CognitiveServices) indicates general Cognitive Services, not a specific search service.
- C. The primary subscription key and the secondary subscription key were rotated: This is incorrect because the body parameter {"keyName": "Key2"} explicitly specifies that only Key2 is to be regenerated. It does not rotate both keys.
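For reference, the call can be reconstructed as the sketch below, using the exact values from the question; the request is only printed, not sent, since sending it requires an Azure Resource Manager bearer token.

```shell
# Reconstruct the regenerateKey call from the question (printed, not sent).
SUB="18c51a87-3a69-47a8-aedc-a54745f708a1"
URL="https://management.azure.com/subscriptions/${SUB}/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18"

echo "POST $URL"
echo '{"keyName": "Key2"}'   # "Key2" resets the secondary key; "Key1" would reset the primary
```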
Supporting Citations:
- REST API to regenerate keys, https://learn.microsoft.com/en-us/rest/api/cognitiveservices/accounts/regenerate-key