[Microsoft] DP-420 - Azure Cosmos DB Developer Specialty Exam Dumps & Study Guide
The Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB (DP-420) exam is the premier certification for data professionals who design and manage cloud-native applications using Azure Cosmos DB. As organizations increasingly adopt NoSQL databases to drive their high-performance and scalable applications, the ability to design and implement robust, globally distributed, and secure data solutions has become a highly sought-after skill. The DP-420 certification validates your specialist-level knowledge of Azure Cosmos DB, including its various APIs and features. It is an essential credential for any professional looking to lead in the age of modern cloud-native development.
Overview of the Exam
The DP-420 exam is a rigorous assessment that covers the design and implementation of applications using Azure Cosmos DB. It is a 120-minute exam consisting of approximately 40-60 questions. The exam is designed to test your knowledge of Azure Cosmos DB technologies and your ability to apply them to real-world development scenarios. From data modeling and partitioning to query optimization, indexing, and consistency, the DP-420 ensures that you have the skills necessary to build and maintain robust cloud-native applications. Achieving the DP-420 certification proves that you are a highly skilled professional who can handle the technical demands of enterprise-grade NoSQL database design.
Target Audience
The DP-420 is intended for developers and data professionals who have a solid understanding of Azure Cosmos DB and NoSQL databases. It is ideal for individuals in roles such as:
1. Cloud-Native Application Developers
2. Data Engineers
3. Solutions Architects
4. Database Administrators
To be successful, candidates should have at least three to five years of experience in enterprise-grade development and a thorough understanding of the Azure Cosmos DB platform and its features.
Key Topics Covered
The DP-420 exam is organized into several main domains:
1. Design and Implement Data Models (35-40%): Designing and implementing effective data models and partitioning strategies for Azure Cosmos DB.
2. Design and Implement Data Distribution (5-10%): Designing and implementing global data distribution and replication solutions.
3. Integrate an Azure Cosmos DB Solution (5-10%): Integrating Azure Cosmos DB with other Azure services and applications.
4. Optimize an Azure Cosmos DB Solution (15-20%): Optimizing query performance, indexing, and cost for Azure Cosmos DB.
5. Maintain an Azure Cosmos DB Solution (25-30%): Implementing security, monitoring, and backup/restore features for Azure Cosmos DB.
Benefits of Getting Certified
Earning the DP-420 certification provides several significant benefits. First, it offers industry recognition of your specialized expertise in Microsoft's cloud-native database technologies. Because Microsoft is a leader in the cloud industry, these skills are in high demand across the globe. Second, it can lead to increased career opportunities and higher salary potential in a variety of roles. Third, it demonstrates your commitment to professional excellence and your dedication to staying current with the latest development practices. By holding this certification, you join a global community of Microsoft professionals and gain access to exclusive resources and continuing education opportunities.
Why Choose NotJustExam.com for Your DP-420 Prep?
The DP-420 exam is challenging and requires a deep understanding of Azure Cosmos DB's complex features. NotJustExam.com is the best resource to help you master this material. Our platform offers an extensive bank of practice questions that are designed to mirror the actual exam’s format and difficulty.
What makes NotJustExam.com stand out is our focus on interactive logic and the accuracy of our explanations. We don’t just provide a list of questions; we provide a high-quality learning experience. Every question in our bank includes an in-depth, accurate explanation that helps you understand the technical reasoning behind the correct database solutions. This ensures that you are truly learning the material and building the confidence needed to succeed on the exam. Our content is regularly updated to reflect the latest Microsoft features and exam updates. With NotJustExam.com, you can approach your DP-420 exam with the assurance that comes from thorough, high-quality preparation. Start your journey toward becoming a Certified Cosmos DB Specialist today with us!
Free [Microsoft] DP-420 - Azure Cosmos DB Developer Specialty Practice Questions Preview
-
Question 1
HOTSPOT -
You have an Azure Cosmos DB Core (SQL) API account named account1 that has the disableKeyBasedMetadataWriteAccess property enabled.
You are developing an app named App1 that will be used by a user named DevUser1 to create containers in account1. DevUser1 has a non-privileged user account in the Azure Active Directory (Azure AD) tenant.
You need to ensure that DevUser1 can use App1 to create containers in account1.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
See interactive view.
Explanation:
The AI recommends changing the suggested answer to Role-Based Access Control (RBAC) and Azure Resource Manager (ARM) API.
The reason for this recommendation is that RBAC is the correct mechanism for granting permissions to create containers, particularly when disableKeyBasedMetadataWriteAccess is enabled. RBAC allows you to assign specific roles to DevUser1, granting them the necessary permissions to create containers within account1. Using the Azure Resource Manager API is the correct method to interact with Azure to manage and deploy resources like Cosmos DB containers.
Resource tokens are intended for granting access to data *within* a container, not for creating containers themselves. The SQL (Core) API is used for interacting with data within Cosmos DB, not for managing the Cosmos DB account or creating containers. Account keys are disabled by the disableKeyBasedMetadataWriteAccess property, and Azure AD Graph API is used for accessing Azure AD resources, not Cosmos DB. Therefore, these options are not appropriate for the scenario described.
- Suggested Answer Selection: Box 1: Resource tokens, Box 2: Azure Resource Manager API
- AI Recommended Answer Selection: Box 1: Role-based access control (RBAC), Box 2: Azure Resource Manager API
Reasoning:
- Role-Based Access Control (RBAC): RBAC is the appropriate mechanism for managing access to Azure resources, including the ability to create Cosmos DB containers. By assigning appropriate roles to DevUser1, such as the "Cosmos DB Account Contributor" role, you grant the user the necessary permissions without using account keys, which are disabled in the scenario.
- Azure Resource Manager API: The Azure Resource Manager API is the correct API to use for managing Azure resources programmatically, including creating, updating, and deleting Cosmos DB containers.
Reasons for not selecting other options:
- Account Keys: The scenario explicitly states that `disableKeyBasedMetadataWriteAccess` is enabled, which means account keys cannot be used to create containers.
- Resource Tokens: Resource tokens are used to grant access to data *within* a Cosmos DB container, not to create containers themselves. They are for fine-grained access control at the data level.
- Azure AD Graph API: The Azure AD Graph API is used for interacting with Azure Active Directory, not for managing Cosmos DB resources.
- SQL (Core) API: The SQL (Core) API is used for querying and manipulating data *within* Cosmos DB containers, not for managing the containers themselves.
Citations:
- Azure Cosmos DB Role-Based Access Control, https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac
- Azure Resource Manager overview, https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/overview
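The two gates at play here can be sketched with a toy Python model: with key-based metadata writes disabled, only an Azure AD identity holding an RBAC role assignment can create containers. All names, roles, and error strings below are illustrative assumptions, not Azure SDK calls.

```python
# Toy model: account keys are blocked for metadata writes, so container creation
# must go through an Azure AD identity with a control-plane RBAC role assignment.
ROLE_ASSIGNMENTS = {("DevUser1", "account1"): "DocumentDB Account Contributor"}

def create_container(principal, account, auth, disable_key_metadata_writes=True):
    if auth == "key" and disable_key_metadata_writes:
        # account keys cannot perform metadata writes when the property is enabled
        raise PermissionError("403: key-based metadata writes are disabled")
    if auth == "aad" and (principal, account) not in ROLE_ASSIGNMENTS:
        # Azure AD callers need an RBAC role assignment granted via ARM
        raise PermissionError("403: no RBAC role assignment on the account")
    return f"container created in {account}"

created = create_container("DevUser1", "account1", auth="aad")  # RBAC + ARM path
try:
    create_container("DevUser1", "account1", auth="key")        # account-key path
    key_denied = False
except PermissionError:
    key_denied = True
print(created, key_denied)
```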
-
Question 2
HOTSPOT -
You have an Azure Cosmos DB Core (SQL) account that has a single write region in West Europe.
You run the following Azure CLI script.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
See interactive view.
Explanation:
The AI suggests that the answer should be Yes, Yes, No (YYN).
Reasoning:
The Azure CLI script provided configures the failover priorities for an Azure Cosmos DB account. Let's analyze each statement:
- Statement 1: After you run the script, North Europe will be a writable region.
- The script sets the failover priority of North Europe to 0. Failover priority 0 indicates the highest priority, making it the primary write region. Therefore, this statement is TRUE.
- Statement 2: After you run the script, West Europe will be used for failover.
- Originally, West Europe was the single write region. By assigning North Europe priority 0, the script makes North Europe the primary writable region, while West Europe becomes a read region and the failover candidate. Therefore, this statement is TRUE.
- Statement 3: After you run the script, the account cost will increase.
- The script only modifies the failover priority configuration. It does not enable multi-region writes or change the provisioned throughput, so the cost does not inherently increase simply by reordering failover priorities while the account remains a single-write-region account. Therefore, this statement is FALSE.
Therefore, based on the script's behavior and the impact of failover priority on region roles and costs, the suggested answer of YYN is correct.
Why the original suggested answer is not correct:
- The original answer correctly identifies North Europe as the new writable region and West Europe as being used for failover after the script executes.
- However, it incorrectly claims that the account cost will increase after running the script. Modifying failover priorities alone does not directly increase costs. Cost increases typically occur when enabling multi-region writes or increasing provisioned throughput.
Citations:
- Automatic failover, https://learn.microsoft.com/en-us/azure/cosmos-db/high-availability
- Multi-region writes, https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-multi-master
- Cost optimization for regions, https://learn.microsoft.com/en-us/azure/cosmos-db/optimize-cost-regions
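A simplified Python model of what failover priorities mean for region roles (not the Azure CLI itself; region names follow the question):

```python
# Simplified model: priority 0 is the write region; every other priority is a
# read region that serves as a failover candidate, in ascending priority order.
regions = {"West Europe": 0}  # single write region before the script runs

def set_failover_priorities(priorities):
    regions.clear()
    regions.update(priorities)

def write_region():
    return min(regions, key=regions.get)

def failover_order():
    return sorted((r for r in regions if regions[r] != 0), key=regions.get)

# Rough equivalent of the script's effect: "North Europe=0" "West Europe=1"
set_failover_priorities({"North Europe": 0, "West Europe": 1})
print(write_region(), failover_order())  # North Europe writable; West Europe fails over
```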
-
Question 3
Correct Answer:
BD
Explanation:
The AI agrees with the suggested answer of BD.
Reasoning:
Both queries B and D will return the top five most ordered fruits based on the 'Orders' field in descending order, which is what the question requires.
- Query B correctly uses SELECT TOP 5 to limit the results to the top five and orders them by c.Orders DESC.
- Query D also correctly retrieves the top 5 items in descending order, using ORDER BY c.Orders DESC OFFSET 0 LIMIT 5.
Reasons for excluding other options:
- Option A is incorrect because it sorts the result in ascending order (ORDER BY c.Orders ASC), which would return the five least ordered fruits, not the most ordered.
- Option C is incorrect because it attempts to order by the 'Type' field (ORDER BY c.Type DESC) instead of the 'Orders' field, and it does not limit the results to the top five.
This approach aligns with the requirements of the question.
Citations:
- Azure Cosmos DB SQL query ORDER BY clause, https://learn.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-order-by
- Azure Cosmos DB SQL query TOP clause, https://learn.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-top
- Azure Cosmos DB SQL query OFFSET LIMIT clause, https://learn.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-offset-limit
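Since the exhibit is not reproduced here, a rough Python analogy of what both correct queries compute: TOP 5 with ORDER BY DESC and ORDER BY DESC with OFFSET 0 LIMIT 5 produce the same result set (the `Orders` field and sample values are assumptions).

```python
# Toy documents with the 'Orders' field referenced in the explanation.
docs = [
    {"id": "apple", "Orders": 120},
    {"id": "banana", "Orders": 300},
    {"id": "cherry", "Orders": 45},
    {"id": "date", "Orders": 210},
    {"id": "fig", "Orders": 90},
    {"id": "grape", "Orders": 150},
]

# SELECT TOP 5 ... ORDER BY c.Orders DESC
top5 = sorted(docs, key=lambda c: c["Orders"], reverse=True)[:5]

# ... ORDER BY c.Orders DESC OFFSET 0 LIMIT 5
offset_limit = sorted(docs, key=lambda c: c["Orders"], reverse=True)[0:0 + 5]

assert top5 == offset_limit  # both syntaxes return the same five items
print([d["id"] for d in top5])
```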
-
Question 4
HOTSPOT -
You have a database in an Azure Cosmos DB Core (SQL) API account.
You plan to create a container that will store employee data for 5,000 small businesses. Each business will have up to 25 employees. Each employee item will have an emailAddress value.
You need to ensure that the emailAddress value for each employee within the same company is unique.
To what should you set the partition key and the unique key? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
See interactive view.
Explanation:
The AI agrees with the suggested answer.
The question requires ensuring the uniqueness of `emailAddress` values for employees within the same company. To achieve this in Azure Cosmos DB, the partition key and unique key must be configured correctly.
Here's a breakdown of why the suggested answer is correct:
- Box 1: CompanyID - The partition key should be set to `CompanyID`.
Reasoning: Partitioning by `CompanyID` ensures that all employee data for a given company resides within the same logical partition. This is crucial for enforcing the unique key constraint. When querying data, filtering by `CompanyID` will be efficient because the query is targeted at a single partition, improving query performance. The question also explicitly states "...each employee within the same company...", which points to company-scoped uniqueness.
- Box 2: emailAddress - The unique key should be set to `emailAddress`.
Reasoning: A unique key policy enforces uniqueness of the specified property within each logical partition. In this case, setting `emailAddress` as the unique key, combined with `CompanyID` as the partition key, guarantees that each employee within a company has a unique email address. Attempting to insert or update an item with a duplicate `emailAddress` within the same `CompanyID` partition will fail, thus enforcing the requirement.
The official Microsoft documentation confirms this approach.
Other considerations:
- Using `emailAddress` as the partition key is not a viable option. This would distribute employee data across many partitions, making it impossible to enforce uniqueness within a company, and it would also force cross-partition queries whenever data is retrieved by CompanyID.
- Not using a unique key would not prevent duplicate email addresses within the same company, failing to meet the primary requirement.
In summary, the suggested answer of setting `CompanyID` as the partition key and `emailAddress` as the unique key is correct because it allows efficient querying by company and enforces the required uniqueness constraint. Other options would either hinder query performance or fail to enforce the uniqueness requirement.
Citations:
- Unique keys in Azure Cosmos DB, https://docs.microsoft.com/en-us/azure/cosmos-db/unique-keys
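A toy Python sketch of how a unique key is scoped to a logical partition: the same email address is allowed in different companies but rejected within one. This simulates the behavior; it is not the Cosmos DB SDK.

```python
# Toy model of a unique key policy: uniqueness of emailAddress is enforced per
# CompanyID (the partition key), mirroring how Cosmos DB scopes unique keys to
# each logical partition.
class Container:
    def __init__(self):
        self._seen = {}  # partition key value -> set of emailAddress values

    def insert(self, item):
        emails = self._seen.setdefault(item["CompanyID"], set())
        if item["emailAddress"] in emails:
            raise ValueError("Unique index constraint violation (409 Conflict)")
        emails.add(item["emailAddress"])

c = Container()
c.insert({"CompanyID": "co1", "emailAddress": "a@contoso.com"})
c.insert({"CompanyID": "co2", "emailAddress": "a@contoso.com"})  # OK: other partition
try:
    c.insert({"CompanyID": "co1", "emailAddress": "a@contoso.com"})  # duplicate in co1
    duplicate_rejected = False
except ValueError:
    duplicate_rejected = True
print(duplicate_rejected)
```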
-
Question 5
HOTSPOT -
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. The container1 container has 120 GB of data.
The following is a sample of a document in container1.

The orderId property is used as the partition key.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
See interactive view.
Explanation:
The AI agrees with the suggested answer, which is Yes-Yes-No.
The reasoning for this answer is as follows:
- Box 1: Yes - The query `SELECT * FROM c WHERE c.city = "Seattle"` does not include the partition key (orderId). Therefore, it must perform a cross-partition query, scanning all partitions to find the records where the city is "Seattle".
- Box 2: Yes - Similarly, the query `SELECT * FROM c WHERE c.zipCode = 98052` also does not include the partition key (orderId). This query requires a cross-partition scan to find all documents with the zipCode 98052.
- Box 3: No - The query `SELECT * FROM c WHERE c.orderId = "ord-1"` includes the partition key (orderId). This query will be scoped to a single partition, since it is filtered by the partition key value. It will only scan the partition where `orderId` is "ord-1" and will not run as a cross-partition query.
The reason for choosing this answer is that it accurately reflects how Azure Cosmos DB handles queries with and without the partition key. Queries lacking the partition key necessitate scanning all partitions (a cross-partition query), while queries specifying the partition key are scoped to a single partition.
The reason for not choosing any other answer is that altering any of the Yes/No selections would contradict the fundamental principles of partition key usage in Cosmos DB query execution. Omitting the partition key always results in a cross-partition query, while its inclusion confines the query to a specific partition.
Citations:
- Partitioning in Azure Cosmos DB, https://learn.microsoft.com/en-us/azure/cosmos-db/partitioning
- Querying across partitions, https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-query-container#cross-partition-queries
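The routing behavior can be sketched in Python (a toy hash-routing model for illustration, not the actual Cosmos DB partitioning implementation):

```python
# Sketch of query routing: a filter on the partition key (orderId) is scoped to
# one partition, while filters on other properties fan out to every partition.
PARTITION_COUNT = 3

def route(pk_value):
    return hash(pk_value) % PARTITION_COUNT  # stand-in for Cosmos DB hash routing

partitions = {i: [] for i in range(PARTITION_COUNT)}
for doc in [
    {"orderId": "ord-1", "city": "Seattle", "zipCode": 98052},
    {"orderId": "ord-2", "city": "Portland", "zipCode": 97201},
    {"orderId": "ord-3", "city": "Seattle", "zipCode": 98052},
]:
    partitions[route(doc["orderId"])].append(doc)

def query(predicate, pk_value=None):
    scanned = [route(pk_value)] if pk_value is not None else list(partitions)
    hits = [d for p in scanned for d in partitions[p] if predicate(d)]
    return hits, len(scanned)

_, fanout = query(lambda d: d["city"] == "Seattle")                        # cross-partition
hits, scoped = query(lambda d: d["orderId"] == "ord-1", pk_value="ord-1")  # single partition
print(fanout, scoped, hits[0]["orderId"])
```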
-
Question 6
You are designing an Azure Cosmos DB Core (SQL) API solution to store data from IoT devices. Writes from the devices will occur every second.
The following is a sample of the data.

You need to select a partition key that meets the following requirements for writes:
✑ Minimizes the partition skew
✑ Avoids capacity limits
✑ Avoids hot partitions
What should you do?
- A. Use timestamp as the partition key.
- B. Create a new synthetic key that contains deviceId and sensor1Value.
- C. Create a new synthetic key that contains deviceId and deviceManufacturer.
- D. Create a new synthetic key that contains deviceId and a random number.
Correct Answer:
D
Explanation:
The AI recommends answer D, which suggests creating a new synthetic key that contains deviceId and a random number.
Reasoning:
The primary goal is to minimize partition skew, avoid capacity limits, and prevent hot partitions, given high-frequency writes from IoT devices. The suggested answer (D) effectively addresses these requirements:
- Minimizes partition skew and avoids hot partitions: By combining the deviceId with a random number, the writes for each device are distributed across multiple partitions. This prevents any single partition from becoming a "hot partition" due to high write volume from a specific device. The randomness ensures a more even distribution of data across partitions.
- Avoids capacity limits: Spreading the data across multiple partitions also helps to avoid hitting the storage or throughput limits of a single partition.
Why other options are less suitable:
- A (Use timestamp as the partition key): Using timestamp alone is a poor choice because all writes at the same timestamp would go to the same partition, creating a hot partition. IoT devices often generate data at similar times, exacerbating this issue.
- B (Create a new synthetic key that contains deviceId and sensor1Value): This is not ideal as sensor values might be similar for many devices at any given time, leading to potential hot partitions if a specific sensor value becomes prevalent. The cardinality might be low, causing uneven distribution.
- C (Create a new synthetic key that contains deviceId and deviceManufacturer): Device manufacturer has very low cardinality. All devices from the same manufacturer would be directed to the same partition. This will cause hot partitions and is not scalable.
While the discussion mentions that the random number approach (D) might create excessive partitions, the benefits of avoiding hot partitions and distributing write load outweigh this concern, especially considering the prompt specifies writes occurring every second. Proper management and monitoring of partition count would mitigate any potential issues.
In summary, the selection of deviceId combined with a random number is to create a synthetic key to distribute writes across multiple partitions, which is a better approach than the other options given the requirement to minimize partition skew, avoid capacity limits, and avoid hot partitions.
Citations:
- Partitioning and horizontal scaling in Azure Cosmos DB, https://learn.microsoft.com/en-us/azure/cosmos-db/partitioning-overview
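A minimal Python sketch of option D's synthetic key (the suffix count of 10 is an illustrative assumption; reads would need to fan out over the same suffix range):

```python
import random

# Sketch of option D: a synthetic partition key combining deviceId with a
# random suffix, spreading one device's writes over several logical partitions.
SUFFIXES = 10  # number of buckets per device (an assumption for illustration)

def synthetic_partition_key(device_id: str) -> str:
    return f"{device_id}-{random.randint(0, SUFFIXES - 1)}"

# Writes from a single chatty device now land on up to SUFFIXES partition key
# values instead of hammering one logical partition.
keys = {synthetic_partition_key("device42") for _ in range(1000)}
print(len(keys))
```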
-
Question 7
You maintain a relational database for a book publisher. The database contains the following tables.

The most common query lists the books for a given authorId.
You need to develop a non-relational data model for Azure Cosmos DB Core (SQL) API that will replace the relational database. The solution must minimize latency and read operation costs.
What should you include in the solution?
- A. Create a container for Author and a container for Book. In each Author document, embed bookId for each book by the author. In each Book document embed authorId of each author.
- B. Create Author, Book, and Bookauthorlnk documents in the same container.
- C. Create a container that contains a document for each Author and a document for each Book. In each Book document, embed authorId.
- D. Create a container for Author and a container for Book. In each Author document and Book document embed the data from Bookauthorlnk.
Correct Answer:
C
Explanation:
The AI agrees with the suggested answer (C).
The best solution for minimizing latency and read operation costs in Azure Cosmos DB when querying books by author is to create a single container containing both Author and Book documents, embedding the authorId within each Book document.
This approach optimizes for the most common query (listing books by author) by denormalizing the data and avoiding joins across containers. By embedding the `authorId` in the `Book` document, a single query on the container can retrieve all books for a given author.
Reasoning:
- Reduced Latency and Cost: Embedding `authorId` in `Book` documents allows retrieving all books for an author with a single query, minimizing latency and read operation costs. This is more efficient than performing joins or multiple queries.
- Optimized for Common Query: The data model is tailored to the most frequent query (listing books by author), aligning with Cosmos DB's recommendation to optimize for read-heavy scenarios.
- Denormalization: This strategy embraces denormalization, which is common in NoSQL databases like Cosmos DB, to improve read performance.
Reasons for not choosing other answers:
- Option A: Creating separate containers for Author and Book documents and embedding `bookId` in Author and `authorId` in Book introduces unnecessary complexity and overhead. Retrieving all books for an author would require querying the Author container to get the `bookId`s, and then querying the Book container for each `bookId`. This increases latency and RU consumption.
- Option B: Creating separate documents for Author, Book, and Bookauthorlnk in the same container resembles a relational model and negates the benefits of Cosmos DB's denormalized approach. Querying books by author would still require "joining" these document types, incurring higher costs and latency.
- Option D: Creating separate containers for Author and Book and embedding data from Bookauthorlnk in both documents duplicates data unnecessarily and requires maintaining consistency across containers. It doesn't provide significant performance advantages over Option C and adds complexity.
The choice to embed `authorId` within the `Book` document directly supports the most common query and aligns with best practices for Cosmos DB data modeling, leading to a more efficient and cost-effective solution.
Citations:
- Azure Cosmos DB data modeling, https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data
- Model data in Azure Cosmos DB, https://learn.microsoft.com/en-us/training/modules/cosmos-db-model-data/
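A toy Python sketch of the single-container model in option C: Author and Book documents live side by side, each Book embeds authorId, and the common query is a single filter (document shapes are illustrative, not taken from the exam exhibit).

```python
# One container holding both document types; each Book embeds authorId so the
# most common query needs no join across containers.
container = [
    {"type": "author", "authorId": "a1", "name": "A. Writer"},
    {"type": "book", "bookId": "b1", "title": "First", "authorId": "a1"},
    {"type": "book", "bookId": "b2", "title": "Second", "authorId": "a1"},
    {"type": "book", "bookId": "b3", "title": "Other", "authorId": "a2"},
]

# Rough analogue of: SELECT * FROM c WHERE c.type = "book" AND c.authorId = "a1"
books_by_a1 = [d for d in container
               if d.get("type") == "book" and d["authorId"] == "a1"]
print([b["title"] for b in books_by_a1])
```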
-
Question 8
You have an Azure Cosmos DB Core (SQL) API account.
You run the following query against a container in the account.

What is the output of the query?
- A. [{"A": false, "B": true, "C": false}]
- B. [{"A": true, "B": false, "C": true}]
- C. [{"A": true, "B": true, "C": false}]
- D. [{"A": true, "B": true, "C": true}]
Correct Answer:
A
Explanation:
The AI agrees with the suggested answer of A.
The query uses the IS_NUMBER function, which checks the type of its argument rather than attempting to parse strings: it returns true only when the value is a JSON number. Based on the query, "A" is derived from the string '20', "B" from the number 20.5, and "C" from the string '20.5a'.
- IS_NUMBER('20') returns false because '20' is a string, even though it looks numeric, so "A" is false.
- IS_NUMBER(20.5) returns true because 20.5 is a numeric value, so "B" is true.
- IS_NUMBER('20.5a') returns false because '20.5a' is a string, so "C" is false.
Thus the result is [{"A": false, "B": true, "C": false}].
Options B, C, and D are incorrect because they do not reflect how IS_NUMBER evaluates the types of these values; note that IS_NUMBER is a type-checking function of the Azure Cosmos DB query language, not T-SQL's ISNUMERIC.
Citations:
- IS_NUMBER in Azure Cosmos DB for NoSQL, https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/query/is-number
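The type-checking behavior of IS_NUMBER can be mimicked in Python (a stand-in for illustration, not the actual Cosmos DB implementation):

```python
# Cosmos DB's IS_NUMBER checks the *type* of its argument, not whether a
# string could be parsed as a number; a minimal Python stand-in:
def is_number(value) -> bool:
    return isinstance(value, (int, float)) and not isinstance(value, bool)

print(is_number("20"))     # a string, even though it looks numeric
print(is_number(20.5))     # a JSON number
print(is_number("20.5a"))  # a string
```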
-
Question 9
You need to implement a trigger in Azure Cosmos DB Core (SQL) API that will run before an item is inserted into a container.
Which two actions should you perform to ensure that the trigger runs? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
- A. Append pre to the name of the JavaScript function trigger.
- B. For each create request, set the access condition in RequestOptions.
- C. Register the trigger as a pre-trigger.
- D. For each create request, set the consistency level to session in RequestOptions.
- E. For each create request, set the trigger name in RequestOptions.
Correct Answer:
C
Explanation:
The AI partially agrees with the suggested answer: C is correct but incomplete on its own. The correct answers are C and E.
Reasoning:
To ensure a trigger runs before an item is inserted into a Cosmos DB container, you need to:
- Register the trigger as a pre-trigger: This is essential because it tells Cosmos DB that the trigger should execute before the write operation.
- Set the trigger name in RequestOptions for each create request: This step ensures that the trigger is actually invoked when a new item is created. Without specifying the trigger in the request options, Cosmos DB won't know to execute it.
According to the Microsoft documentation on Triggers, you must register the trigger and then specify it in the RequestOptions when performing the operation.
Why other options are incorrect:
- A: Appending "pre" to the function name is not a recognized way to define a pre-trigger. The registration process handles specifying whether it's a pre- or post-trigger.
- B: Setting access conditions is related to optimistic concurrency and doesn't directly influence trigger execution.
- D: Consistency level settings affect data visibility and consistency, not trigger execution.
In summary, registering the trigger as a pre-trigger and specifying its name in the RequestOptions are crucial steps to ensure its execution before an item is inserted.
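A toy Python model of why both steps are required: a trigger that is registered but not named in the request options never runs (the keyword name mirrors the azure-cosmos Python SDK's `pre_trigger_include`, but this is a simulation, not the SDK).

```python
# Toy model of both required steps: (1) register the trigger as a pre-trigger,
# (2) name it in the request options of each create; omit either and it never runs.
registered = {}  # trigger name -> (kind, body)

def register_pre_trigger(name, body):
    registered[name] = ("pre", body)

def create_item(item, pre_trigger_include=None):
    for name in (pre_trigger_include or []):
        kind, body = registered[name]  # an unknown name would raise, as Cosmos DB errors
        if kind == "pre":
            body(item)                 # runs before the write
    return item

register_pre_trigger("validateToDoItemTimestamp", lambda doc: doc.setdefault("ts", 0))

ran_without = "ts" in create_item({"id": "1"})  # registered but not requested: skipped
ran_with = "ts" in create_item({"id": "2"},
                               pre_trigger_include=["validateToDoItemTimestamp"])
print(ran_without, ran_with)
```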
-
Question 10
HOTSPOT -
You have a container in an Azure Cosmos DB Core (SQL) API account.
You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency.
What should you include in the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Correct Answer:
See interactive view.
Explanation:
The AI recommends that the suggested answer is incorrect.
Instead, the correct options to implement optimistic concurrency when replacing a document in Azure Cosmos DB using the SDK are AccessCondition and _etag.
The reason for this is that optimistic concurrency is achieved by using the ETag of the document. When you retrieve a document, it has an associated ETag. To update or replace the document using optimistic concurrency, you include the ETag in the request. Cosmos DB then checks if the ETag in the request matches the current ETag of the document in the database. If they match, the operation proceeds; otherwise, it fails, indicating that the document has been modified since you last retrieved it. The AccessCondition allows us to specify the condition based on the ETag.
- Box 1: AccessCondition. The AccessCondition class allows us to specify conditions based on the ETag for optimistic concurrency.
- Box 2: _etag. The _etag property holds the ETag value of the document, which is used in the AccessCondition to verify that the document has not been changed since it was last read.
The original suggested answer has the following problems:
- Using ConsistencyLevel directly does not implement optimistic concurrency. While consistency levels affect read and write operations, they do not provide a mechanism to check if a document has been changed before updating it.
- While _etag is the correct property, it needs to be combined with AccessCondition to properly implement optimistic concurrency.
Based on the documentation and community consensus, the correct way to implement optimistic concurrency is to use the AccessCondition class along with the document's _etag.
The following is example code for replacing an item using optimistic concurrency with the Azure Cosmos DB .NET SDK v3 (SalesOrder is an illustrative document type):
try
{
    // Read the item to get its current ETag
    ItemResponse<SalesOrder> readResponse = await this.container.ReadItemAsync<SalesOrder>(
        id: "9E4425A2-3B2F-4880-B04B-776E8993F633",
        partitionKey: new PartitionKey("Seattle"));
    string etag = readResponse.ETag;
    SalesOrder item = readResponse.Resource;
    // Modify the item (assuming ShippingCity is not the partition key path;
    // a partition key value cannot be changed by a replace)
    item.ShippingCity = "New York";
    // In SDK v3 the ETag condition is supplied through ItemRequestOptions.IfMatchEtag
    ItemRequestOptions requestOptions = new ItemRequestOptions()
    {
        IfMatchEtag = etag
    };
    // Replace the item, passing the same partition key value used for the read;
    // the call fails with 412 (Precondition Failed) if the ETag no longer matches
    ItemResponse<SalesOrder> replaceResponse = await this.container.ReplaceItemAsync<SalesOrder>(
        item, item.Id, new PartitionKey("Seattle"), requestOptions);
    Console.WriteLine($"Replace Item Status Code: {replaceResponse.StatusCode}");
}
catch (CosmosException ex)
{
    Console.WriteLine($"Failed to replace item: {ex.StatusCode}.");
}
Citations:
- Optimistic concurrency - Azure Cosmos DB | Microsoft Learn, https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/database-transactions-optimistic-concurrency
- ItemRequestOptions Class (Microsoft.Azure.Cosmos) , https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.itemrequestoptions?view=azure-cosmos-dotnet
- AccessCondition Class (Microsoft.Azure.Cosmos), https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.accesscondition?view=azure-cosmos-dotnet
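The ETag check the server performs can be sketched as a small Python compare-and-swap: a replace succeeds only if the supplied ETag still matches the stored one (a simulation of the semantics, not the Cosmos DB SDK).

```python
import uuid

# Minimal sketch of the ETag check Cosmos DB performs on a conditional replace.
class Store:
    def __init__(self):
        self._items = {}  # id -> (etag, document)

    def upsert(self, doc):
        etag = str(uuid.uuid4())         # every write produces a fresh ETag
        self._items[doc["id"]] = (etag, doc)
        return etag

    def replace(self, doc, if_match_etag):
        current_etag, _ = self._items[doc["id"]]
        if if_match_etag != current_etag:
            # a concurrent update changed the document since it was read
            raise RuntimeError("412 Precondition Failed")
        return self.upsert(doc)

store = Store()
etag = store.upsert({"id": "order1", "city": "Seattle"})
store.replace({"id": "order1", "city": "Seattle", "status": "shipped"},
              if_match_etag=etag)        # succeeds: ETag still current
try:
    store.replace({"id": "order1"}, if_match_etag=etag)  # stale ETag: writer loses
    stale_write_succeeded = True
except RuntimeError:
    stale_write_succeeded = False
print(stale_write_succeeded)
```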