[CISCO] 350-901 - Core Platforms & APIs (DEVCOR) Exam Dumps & Study Guide
The Developing Applications using Cisco Core Platforms and APIs (DEVCOR) 350-901 certification is the core exam for the Cisco Certified DevNet Professional certification track. As the networking industry shifts towards software-defined architectures and automation, the ability to build and manage applications that leverage Cisco's core platforms and APIs has become a critical skill for both developers and network engineers. The 350-901 validates your expertise in implementing network automation, leveraging APIs, and building resilient, secure applications on Cisco infrastructure. It is an essential credential for any professional looking to lead in the age of programmable networks.
Overview of the Exam
The 350-901 exam is a rigorous assessment that covers the development and deployment of applications using Cisco's core platforms and APIs. It is a 120-minute exam consisting of approximately 100 questions. The exam is designed to test your knowledge of software development best practices, network automation, and the various APIs available across Cisco's portfolio. From infrastructure as code (IaC) and containerization to security and monitoring, the 350-901 ensures that you have the skills necessary to build modern, automated network solutions. Achieving the 350-901 certification proves that you are a highly skilled professional who can handle the technical demands of software-defined networking.
Target Audience
The 350-901 is intended for professionals who have a solid understanding of software development and network automation. It is ideal for individuals in roles such as:
1. Network Automation Engineers
2. Software Developers
3. DevOps Engineers
4. Systems Engineers
5. Solutions Architects
To be successful, candidates should have at least three to five years of experience in software development and a thorough understanding of Cisco's networking platforms and APIs.
Key Topics Covered
The 350-901 exam is organized into five main domains:
1. Software Development and Design (20%): Applying software development best practices, including version control, testing, and CI/CD.
2. Using APIs (20%): Leveraging RESTful APIs and authentication mechanisms across Cisco's platforms.
3. Cisco Platforms (20%): Understanding and using APIs for Cisco DNA Center, Cisco SD-WAN, and Cisco Meraki.
4. Application Deployment and Security (20%): Deploying applications using containerization and ensuring application security.
5. Infrastructure and Automation (20%): Implementing infrastructure as code (IaC) and using automation tools like Ansible and Terraform.
Benefits of Getting Certified
Earning the 350-901 certification provides several significant benefits. First, it offers industry recognition of your specialized expertise in Cisco's DevNet technologies. As the demand for network automation and software-defined networking continues to grow, these skills are in high demand across the globe. Second, it can lead to increased career opportunities and higher salary potential in a variety of roles. Third, it demonstrates your commitment to professional excellence and your dedication to staying current with the latest networking and software development practices. By holding this certification, you join a global community of Cisco professionals and gain access to exclusive resources and continuing education opportunities.
Why Choose NotJustExam.com for Your 350-901 Prep?
The 350-901 exam is challenging and requires a deep understanding of Cisco's complex platforms and APIs. NotJustExam.com is the best resource to help you master this material. Our platform offers an extensive bank of practice questions that are designed to mirror the actual exam’s format and difficulty.
What makes NotJustExam.com stand out is our focus on interactive logic and the accuracy of our explanations. We don’t just provide a list of questions; we provide a high-quality learning experience. Every question in our bank includes an in-depth, accurate explanation that helps you understand the technical reasoning behind the correct automation and development solutions. This ensures that you are truly learning the material and building the confidence needed to succeed on the exam. Our content is regularly updated to reflect the latest Cisco features and exam updates. With NotJustExam.com, you can approach your 350-901 exam with the assurance that comes from thorough, high-quality preparation. Start your journey toward becoming a Certified DevNet Professional today with us!
Free [CISCO] 350-901 - Core Platforms & APIs (DEVCOR) Practice Questions Preview
-
Question 1
A developer has created an application based on customer requirements. The customer needs to run the application with the minimum downtime.
Which design approach regarding high-availability applications, Recovery Time Objective, and Recovery Point Objective must be taken?
- A. Active/passive results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
- B. Active/passive results in lower RTO and RPO. For RPO, data synchronization between the two data centers does not need to be timely to allow seamless request flow.
- C. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers does not need to be timely to allow seamless request flow.
- D. Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
Correct Answer:
D
Explanation:
The AI agrees with the suggested answer, D - Active/active results in lower RTO and RPO. For RPO, data synchronization between the two data centers must be timely to allow seamless request flow.
Reasoning:
The question emphasizes the need for *minimum downtime*. An Active/Active setup is designed precisely for this. In an Active/Active configuration, both data centers are simultaneously serving traffic. If one data center fails, the other immediately takes over, resulting in a much lower Recovery Time Objective (RTO) compared to an Active/Passive setup where the passive site needs to be activated. Furthermore, for minimal data loss (Recovery Point Objective - RPO), timely data synchronization between the active data centers is critical. If data synchronization is not timely, a failover could result in data loss.
Why other options are not the best:
* **Option A & B (Active/Passive):** Active/Passive setups inherently have a higher RTO because the passive site needs to be brought online and traffic redirected. While suitable for HA, they don't provide the *minimum* downtime as requested by the question.
* **Option C (Active/Active with no timely data sync):** While Active/Active is the right direction, neglecting timely data synchronization (RPO) introduces the risk of data loss during failover, which is not an ideal design.
Therefore, Active/Active with timely data synchronization is the most appropriate design to achieve the lowest RTO and RPO.
-
Question 2
DRAG DROP -
An application is being built to collect and display telemetry streaming data. Drag and drop the elements of this stack from the left onto the correct element functions on the right.
Select and Place:

Correct Answer:
See interactive view.
Explanation:
The AI agrees with the suggested answer.
Here's a breakdown of the correct mapping with reasoning:
- Data Generator: IOS-XE Device - IOS-XE devices are network devices that can generate telemetry data. They are the source of the data that needs to be collected and displayed.
- Data Collector: Python Application - A Python application is well-suited to collect the streaming telemetry data from the IOS-XE device. Python has libraries that simplify network communication and data handling.
- Datastore: Elasticsearch - Elasticsearch is a distributed, RESTful search and analytics engine capable of storing and indexing large volumes of data. It's a popular choice for storing telemetry data. The primary reason to choose Elasticsearch is its ability to handle large volumes of data and provide fast search and analysis capabilities.
- Visualization Platform: Kibana - Kibana is a data visualization dashboard for Elasticsearch. The key reasons to choose Kibana are its native integration with Elasticsearch and its ability to build dashboards and visualizations on top of the stored telemetry data.
Reasoning for Not Choosing Alternative Mappings:
- IOS-XE as Visualization Platform: While IOS-XE does support application hosting (IOx), it is primarily a network operating system and serves as the data source in this stack. Kibana is purpose-built for visualizing data stored in Elasticsearch, making it the appropriate visualization platform.
- Kibana as Data Collector: Kibana is designed for visualizing data, not collecting it. A Python application is better suited for collecting data from network devices.
Therefore, the correct mapping is:
- Visualization platform: Kibana
- Data collector: Python application
- Data generator: IOS-XE device
- Datastore: Elasticsearch
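To make the collector's role concrete, here is a minimal sketch of how a Python data collector might prepare an index request for Elasticsearch's document API. The endpoint URL, index name, and telemetry fields are illustrative assumptions, not part of the exam item:

```python
import json

# Hypothetical sketch of the "data collector" role: a Python process that
# takes one telemetry record from an IOS-XE device and prepares an
# Elasticsearch index request. URL, index, and field names are assumptions.
ES_URL = "http://localhost:9200"
INDEX = "telemetry"

def build_index_request(record: dict) -> tuple[str, str]:
    """Return the (url, body) pair for indexing one telemetry record."""
    url = f"{ES_URL}/{INDEX}/_doc"
    body = json.dumps(record)
    return url, body

url, body = build_index_request({"device": "ios-xe-01", "cpu_util": 37})
# The collector would POST `body` to `url`; Kibana then visualizes the index.
```

In a real deployment the collector would subscribe to the device's telemetry stream and POST each record; the separation shown here (generate, collect, store, visualize) is the point of the drag-and-drop mapping.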
-
Question 3
A cloud native project is being worked on in which all source code and dependencies are written in Python, Ruby, and/or JavaScript. A change in code triggers a notification to the CI/CD tool to run the CI/CD pipeline.
Which step should be omitted from the pipeline?
- A. Deploy the code to one or more environments, such as staging and/or production.
- B. Build one or more containers that package up code and all its dependencies.
- C. Compile code.
- D. Run automated tests to validate the code.
Correct Answer:
C
Explanation:
The AI agrees with the suggested answer, which is C (Compile code).
The primary reason for selecting 'Compile code' as the step to be omitted is that Python, Ruby, and JavaScript are predominantly interpreted languages. Interpreted languages do not require a compilation step before execution, unlike compiled languages such as C++ or Java (although Java first compiles to bytecode, which the JVM then executes).
Therefore, in a CI/CD pipeline dealing exclusively with these languages, a compilation stage would be redundant and unnecessary.
Here's why the other options are typically included in a CI/CD pipeline:
- A (Deploy the code to one or more environments, such as staging and/or production): Deployment is a fundamental part of the CI/CD process, ensuring that code changes are moved to the appropriate environments for testing and release.
- B (Build one or more containers that package up code and all its dependencies): Containerization (e.g., using Docker) is a common practice in cloud-native development. It ensures consistency and portability across different environments by packaging the application code along with its dependencies.
- D (Run automated tests to validate the code): Automated testing is crucial for ensuring the quality and reliability of the code. It helps to catch bugs and regressions early in the development cycle.
In summary, since Python, Ruby, and JavaScript are interpreted languages, compilation is not a required step, making option C the correct answer to omit from the CI/CD pipeline.
Citations:
- Interpreted language, https://en.wikipedia.org/wiki/Interpreted_language
- CI/CD, https://www.redhat.com/en/topics/devops/what-is-ci-cd
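The stage selection can be sketched in a few lines of Python. The stage names and ordering here are illustrative, not a specific CI tool's syntax; the point is simply that the compile stage only appears when a compiled language is in the mix:

```python
# Illustrative sketch: which pipeline stages apply, given the project's
# languages. Stage names are assumptions, not any CI tool's real syntax.
def pipeline_stages(languages: set[str]) -> list[str]:
    interpreted = {"python", "ruby", "javascript"}
    stages = []
    if not languages <= interpreted:
        stages.append("compile")  # only needed when a compiled language is present
    stages += ["test", "build-container", "deploy"]
    return stages

# All-interpreted project: the compile stage is omitted entirely.
print(pipeline_stages({"python", "ruby", "javascript"}))
```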
-
Question 4
Which two statements are considered best practices according to the 12-factor app methodology for application design? (Choose two.)
- A. Application code writes its event stream to stdout.
- B. Application log streams are archived in multiple replicated databases.
- C. Application log streams are sent to log indexing and analysis systems.
- D. Application code writes its event stream to specific log files.
- E. Log files are aggregated into a single file on individual nodes.
Correct Answer:
AC
Explanation:
The AI agrees with the suggested answer of A and C.
The suggested answer (A and C) aligns with the twelve-factor app methodology's best practices for handling logs.
Reasoning:
According to the twelve-factor app methodology, an application should not be concerned with routing or storing its output stream. Instead, each running process should write its event stream, unbuffered, to stdout. This approach treats logs as event streams, enabling easy routing to various destinations for viewing and archival. The application code should write its event stream to stdout, and these streams should be sent to log indexing and analysis systems for effective monitoring and troubleshooting.
Reasons for not choosing other answers:
- B: Archiving log streams in multiple replicated databases is a storage concern, which is outside the application's responsibility in a twelve-factor app.
- D: Writing event streams to specific log files is against the principle of streaming to stdout.
- E: Aggregating log files into a single file on individual nodes introduces unnecessary complexity and goes against the principle of treating logs as an event stream.
Citations:
- Twelve-Factor App Methodology - Logs, https://12factor.net/logs
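The stdout pattern from answer A can be sketched in a few lines of Python. The JSON field names below are illustrative; the essential point is that the process writes one event per line to stdout and never manages log files itself, leaving routing and indexing (answer C) to the execution environment:

```python
import sys
import json
import datetime

# Twelve-factor logging sketch: emit each event as one JSON line on stdout.
# Field names ("ts", "level", "msg") are illustrative assumptions; a log
# shipper outside the process routes the stream to an indexing system.
def log_event(message: str, level: str = "info") -> str:
    event = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "level": level,
        "msg": message,
    }
    line = json.dumps(event)
    sys.stdout.write(line + "\n")  # straight to stdout, no file handling
    return line

log_event("request served")
```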
-
Question 5
An organization manages a large cloud-deployed application that employs a microservices architecture. No notable issues occur with downtime because the services of this application are redundantly deployed over three or more data center regions. However, several times a week reports are received about application slowness. The container orchestration logs show faults in a variety of containers that cause them to fail and then spin up brand new.
Which action must be taken to improve the resiliency design of the application while maintaining current scale?
- A. Update the base image of the containers.
- B. Test the execution of the application with another cloud services platform.
- C. Increase the number of containers running per service.
- D. Add consistent "try/catch(exception)" clauses to the code.
Correct Answer:
D
Explanation:
The AI agrees with the suggested answer of D. Add consistent try/catch(exception) clauses to the code.
Reasoning: The problem description indicates that containers are failing due to faults, leading to restarts. This suggests unhandled exceptions within the application code running inside the containers. Adding try/catch blocks will allow the application to gracefully handle these exceptions, preventing container crashes and improving overall resiliency. This approach directly addresses the described issue without requiring changes to the infrastructure or scaling configurations.
Reasons for not choosing other options:
- A. Update the base image of the containers: While updating base images is a good security practice and can address some vulnerabilities, it doesn't directly solve the problem of application-level exceptions causing container failures. The logs indicate faults within the containers, which suggests a runtime error rather than a base image issue.
- B. Test the execution of the application with another cloud services platform: Switching cloud platforms might introduce new complexities and costs without addressing the underlying cause of the container failures. The problem is likely within the application code itself, not necessarily with the cloud platform.
- C. Increase the number of containers running per service: Increasing the number of containers might provide some redundancy, but it doesn't solve the root cause of the container failures. The application will still experience the same errors, and the increased number of containers might even exacerbate the problem by generating more error logs and consuming more resources.
Citations:
- Exception Handling in Java, https://www.w3schools.com/java/java_try_catch.asp
- Best Practices for Container Security, https://www.aquasec.com/blog/container-security-best-practices/
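The pattern behind answer D can be sketched briefly. The handler below and its failure mode are hypothetical; the point is that a caught exception is logged and answered gracefully, rather than crashing the process and forcing the orchestrator to spin up a new container:

```python
import logging

logging.basicConfig(level=logging.ERROR)

# Hypothetical request handler: raises KeyError on a malformed payload.
def handle_request(payload: dict) -> str:
    return "processed " + payload["id"]

# Option D in practice: wrap fault-prone work in try/except so one bad
# request logs an error instead of killing the whole container.
def resilient_handle(payload: dict) -> str:
    try:
        return handle_request(payload)
    except Exception:
        logging.exception("request failed; container keeps running")
        return "error"

resilient_handle({"id": "42"})
resilient_handle({})  # malformed payload is absorbed, not fatal
```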
-
Question 6
How should a web application be designed to work on a platform where up to 1000 requests per second can be served?
- A. Use algorithms like random early detection to deny excessive requests.
- B. Set a per-user limit (for example, 5 requests/minute/user) and deny the requests from the users who have reached the limit.
- C. Only 1000 user connections are allowed; further connections are denied so that all connected users can be served.
- D. All requests are saved and processed one by one so that all users can be served eventually.
Correct Answer:
B
Explanation:
Based on the question and discussion, the AI agrees with the suggested answer B.
The reasoning behind this recommendation is as follows:
The most suitable approach to handle a high volume of requests (up to 1000 per second) on a web application platform is to implement rate limiting on a per-user basis. This involves setting a limit on the number of requests a user can make within a specific time frame (e.g., 5 requests per minute). Requests exceeding this limit are denied. This strategy offers several advantages:
- Fair Usage: Ensures that all users have equitable access to resources, preventing any single user from monopolizing the server's capacity.
- Resource Protection: Safeguards the server from being overwhelmed by excessive requests, maintaining stability and responsiveness for all users.
- DoS/DDoS Mitigation: Helps to mitigate denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks by limiting the impact of malicious traffic.
The other options are less suitable for the following reasons:
- Option A (Random Early Detection): While RED is useful for managing congestion in network queues, it's not ideal for web application request management. It doesn't provide per-user fairness and can lead to unpredictable service degradation.
- Option C (Limiting User Connections): This approach focuses on the number of connected users rather than the number of requests. It doesn't address the issue of users making excessive requests with open connections. A user might open one connection and flood the server with requests.
- Option D (Queueing All Requests): Saving and processing all requests one by one may seem fair, but it can lead to significant delays and a poor user experience, especially under high load. This can create a large backlog and potentially crash the server due to memory exhaustion or timeouts. Furthermore, it does not provide protection from malicious requests.
Therefore, option B provides the most effective and balanced solution for managing a high volume of requests while ensuring fairness, stability, and security.
Citations:
- Rate Limiting, https://en.wikipedia.org/wiki/Rate_limiting
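A per-user limit like the one in answer B can be sketched as a sliding-window rate limiter. This is a minimal in-memory illustration using the question's example of 5 requests per window; in production the counters would live in a shared store such as Redis so every app instance enforces the same limit:

```python
import time
from collections import defaultdict, deque

# Minimal sliding-window rate limiter sketch (option B). In-memory only;
# a real deployment would keep this state in a shared store (e.g. Redis).
class RateLimiter:
    def __init__(self, limit=5, window_s=60.0):
        self.limit = limit        # max requests per user per window
        self.window_s = window_s  # window length in seconds
        self.hits = defaultdict(deque)

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[user]
        while q and now - q[0] > self.window_s:  # drop expired timestamps
            q.popleft()
        if len(q) >= self.limit:
            return False                          # deny: limit reached
        q.append(now)
        return True
```

A sixth request from the same user inside one window is denied, while other users are unaffected, which is exactly the fairness property the explanation describes.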
-
Question 7
An organization manages a large cloud-deployed application that employs a microservices architecture across multiple data centers. Reports have been received about application slowness. The container orchestration logs show that faults have been raised in a variety of containers that caused them to fail and then spin up brand new instances.
Which two actions can improve the design of the application to identify the faults? (Choose two.)
- A. Automatically pull out the container that fails the most over a time period.
- B. Implement a tagging methodology that follows the application execution from service to service.
- C. Add logging on exception and provide immediate notification.
- D. Do a write to the datastore every time there is an application failure.
- E. Implement an SNMP logging system with alerts in case a network link is slow.
Correct Answer:
BC
Explanation:
The AI agrees with the suggested answer of B and C.
Reasoning:
The problem describes an application with microservices architecture experiencing slowness and container faults. To improve the design for fault identification, the focus should be on increasing observability and traceability within the application.
- B. Implement a tagging methodology that follows the application execution from service to service: This approach, often referred to as distributed tracing, allows you to track requests as they move between different microservices. By assigning unique IDs or tags to each request and propagating them across service boundaries, you can pinpoint the source of latency or errors. This greatly aids in identifying which part of the application is failing.
- C. Add logging on exception and provide immediate notification: Comprehensive logging, especially for exceptions, is crucial for debugging. Immediate notifications (e.g., alerts) enable proactive intervention when errors occur. The combination of detailed logging and immediate alerts makes it easier to identify and resolve the root causes of application failures.
Why other options are incorrect:
- A. Automatically pull out the container that fails the most over a time period: While automatically removing failing containers might seem helpful for stability, it doesn't address the underlying cause of the failures and can lead to cascading failures if not handled carefully. Simply removing the failing container doesn't identify the fault, and could make diagnosis harder.
- D. Do a write to the datastore every time there is an application failure: Writing to the datastore on every failure can add significant overhead and potentially impact performance, especially if failures are frequent. While logging errors is helpful, directly writing to the datastore for every failure is not an efficient or scalable solution for fault identification. This approach is also less effective than logging on exceptions.
- E. Implement an SNMP logging system with alerts in case a network link is slow: While network performance can impact application performance, the primary issue described is container faults and application slowness, not necessarily network-related problems. SNMP is useful for monitoring network devices, but it doesn't provide insights into the internal workings of the application and the specific causes of container failures.
Citations:
- Distributed Tracing, https://opentelemetry.io/docs/concepts/distributed-tracing/
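The tagging methodology from answer B can be sketched as correlation-ID propagation. The header name "X-Correlation-ID" is a common convention assumed here, not something mandated by the question; the point is that the same ID travels with the request across every service hop, so logs from different microservices can be joined:

```python
import uuid

# Correlation-ID propagation sketch (option B). Header name is a common
# convention assumed for illustration, not part of the exam item.
HEADER = "X-Correlation-ID"

def inbound(headers: dict) -> dict:
    """Reuse the caller's ID, or mint one at the edge of the system."""
    cid = headers.get(HEADER) or str(uuid.uuid4())
    return {**headers, HEADER: cid}

def outbound(headers: dict) -> dict:
    """Copy the same ID onto the next hop so logs can be joined later."""
    return {HEADER: headers[HEADER]}

h1 = inbound({})                # edge service mints an ID
h2 = outbound(inbound(h1))      # downstream call carries the same ID
assert h1[HEADER] == h2[HEADER]
```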
-
Question 8
Which two situations are flagged by software tools designed for dependency checking in continuous integration environments, such as OWASP? (Choose two.)
- A. publicly disclosed vulnerabilities related to the included dependencies
- B. mismatches in coding styles and conventions in the included dependencies
- C. incompatible licenses in the included dependencies
- D. test case failures introduced by bugs in the included dependencies
- E. buffer overflows to occur as the result of a combination of the included dependencies
Correct Answer:
AE
Explanation:
The suggested answer of AE is accurate.
The suggested answer is supported by the function of dependency checking tools like OWASP Dependency-Check, which primarily aims to identify known publicly disclosed vulnerabilities in project dependencies. Additionally, static analysis tools often flag potential buffer overflows, which can arise from the interaction of included dependencies.
Reasoning:
-
Option A is correct because dependency checking tools like OWASP Dependency-Check are specifically designed to identify publicly known vulnerabilities associated with the included dependencies. This is their primary function.
-
Option E is correct because static analysis tools, often integrated into CI/CD pipelines, can identify potential buffer overflows that might occur due to the interactions between different included dependencies. While OWASP Dependency-Check itself might not directly detect buffer overflows, other SAST tools used in conjunction within a CI/CD environment can.
-
Option B is incorrect because while coding style mismatches can be a concern for code maintainability, they are not typically flagged by dependency checking tools. Dependency checking focuses on security and licensing issues, not code style.
-
Option C is incorrect because although license incompatibility is checked by some software composition analysis (SCA) tools, it is not the primary focus of OWASP Dependency-Check.
-
Option D is incorrect because test case failures caused by bugs are more related to integration tests and unit tests and are not the main focus of dependency checking tools.
Citations:
- OWASP Dependency-Check, https://owasp.org/www-project-dependency-check/
-
Question 9
A network operations team is using the cloud to automate some of their managed customer and branch locations. They require that all of their tooling be ephemeral by design and that the entire automation environment can be recreated without manual commands. Automation code and configuration state will be stored in git for change control and versioning. The engineering high-level plan is to use VMs in a cloud-provider environment, then configure open source tooling onto these VMs to poll, test, and configure the remote devices, as well as deploy the tooling itself.
Which configuration management and/or automation tooling is needed for this solution?
- A. Ansible
- B. Ansible and Terraform
- C. NSO
- D. Terraform
- E. Ansible and NSO
Correct Answer:
B
Explanation:
The question requires tooling that is ephemeral, can be recreated without manual commands, and uses Git for version control, to automate managed customer and branch locations. The proposed solution involves VMs in a cloud provider environment, with open-source tools for polling, testing, configuring remote devices, and deploying the tooling itself.
The AI agrees with the suggested answer: B (Ansible and Terraform).
Reasoning:
- Terraform is an Infrastructure as Code (IaC) tool that allows you to define and provision infrastructure resources (like VMs) in a declarative way. This aligns perfectly with the requirement that the entire automation environment can be recreated without manual commands. Terraform can read its configuration from Git, ensuring version control.
- Ansible is a configuration management and automation tool. It can be used to configure the open-source tooling onto the VMs, poll, test, and configure the remote devices, as well as deploy the tooling itself. Like Terraform, Ansible configurations can be stored in Git.
- The combination of Terraform and Ansible provides a complete solution for infrastructure provisioning and configuration management, meeting all the requirements outlined in the question.
Reasons for not choosing other options:
- A (Ansible): While Ansible is excellent for configuration management, it does not handle infrastructure provisioning. The question explicitly states a need to build VMs.
- C (NSO): Cisco NSO is a network automation platform but is not as broadly applicable to general infrastructure provisioning as Terraform. While NSO can automate network device configuration, it doesn't directly address the need for creating the VMs in the cloud environment in an ephemeral manner.
- D (Terraform): Terraform excels at infrastructure provisioning, but it does not provide the configuration management capabilities needed to configure the tooling on the VMs or automate the testing and configuration of remote devices.
- E (Ansible and NSO): While Ansible and NSO could potentially fulfill the requirements, Terraform is a more suitable tool for the infrastructure provisioning aspect, given the cloud-based environment. NSO is more targeted toward network device configuration, which is only a portion of the requirements.
Citations:
- Terraform - Infrastructure as Code, https://www.terraform.io/
- Ansible - Configuration Management, https://www.ansible.com/
-
Question 10
DRAG DROP -
Drag and drop the git commands from the left into the correct order on the right to create a feature branch from the master and then incorporate that feature branch into the master.
Select and Place:

Correct Answer:
See interactive view.
Explanation:
The AI agrees with the suggested answer.
The correct sequence of Git commands to create a feature branch, merge it into the master branch, and then clean up is as follows:
1. git checkout -b feature master: Creates a new branch named "feature" based on the "master" branch and switches to it.
2. git checkout master: Switches back to the "master" branch.
3. git merge --no-ff feature: Merges the "feature" branch into the "master" branch using the --no-ff option, which ensures a merge commit is always created, preserving the history of the feature branch.
4. git push origin master: Pushes the updated "master" branch to the remote repository ("origin").
5. git branch -d feature: Deletes the local "feature" branch.
Reasoning for the answer:
The steps ensure that a new feature branch is created from master, work is done on the feature branch, changes are merged back into master, the master branch is updated remotely, and finally, the local feature branch is deleted. The --no-ff flag is crucial for maintaining a clear history. Pushing to origin before deleting the local branch makes the changes available to the team.
Why the other orders are incorrect:
Reversing the order of certain commands would lead to errors or loss of data. For example:
- Deleting the feature branch before merging would discard the changes.
- Pushing before merging would push an outdated master branch.
- Checking out master before creating the feature branch would mean the feature branch would not be based on the correct commit.
This order aligns with standard Git workflow practices for feature development and ensures that the master branch remains stable and that the history is properly tracked. It's also best practice to push changes before deleting the local branch, allowing for rollback or collaboration if needed.
Citations:
- Git Branching - Basic Branching and Merging, https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging
- git-merge Documentation, https://git-scm.com/docs/git-merge