.NET Software Architecture Interview Questions

Software Architecture Interview Questions - Set 1

You have a large new project that you are the first person assigned to. What are your initial concerns and actions? 


My initial actions will be to establish the key organizational factors that will facilitate the success of the project. We can summarize these actions with the acronym VRAPS. 


"V" is for "Vision". You need to establish clear guidance for the project's definition of success. 


"R" is for "Rhythm". You need to determine the milestones for key deliverables. Dates are not as important as milestones in the initial stage of the project. The milestones do not necessarily need to include working code, but should always be concrete deliverables. This might include artifacts such as feasibility studies or requirements documents in addition to code. 


"A" is for "Anticipating Problems." This is the area where the experience of a seasoned architect is most important. Potential problems will span technology, analytical, process, and interpersonal domains. For a seasoned architect, anticipating technology and analytical problems is reasonably assumed. Most problems in a project result from process and interpersonal issues, however, so those would be the key focus areas for problem anticipation. 


"P" is for "Partnering". This is the most important of these five concerns when starting on a large new initiative. You will never be able to complete a large new initiative on your own. More importantly, there will be times when you will be asking others for extraordinary efforts and to devote their valuable time, keeping in mind that these necessary stakeholders do not report to you. Critically, you cannot rely on relationships that you have not developed in advance, so a competent architect cultivates relationships ahead of time, spending perhaps 15% of his time just on relationship-building. 


"S" is for "Simplification". Most new software systems are - by necessity - very complex. You want to keep systems as simple as possible, but no simpler. Having said that, every simplification results in exponential reduction of effort and improvement in reliability. Key concerns here are producing a "minimum viable product" first, avoiding "gold-plating" the requirements, and being judicious about choosing appropriate technologies for their fitness rather than their coolness. 



You have a high transaction volume web site that is working very slowly in production, but the problem is not urgent. What areas would you turn your attention to first? 


Production or Test? If the problem is said to be occurring in production but is not known to be occurring in test, the first thing we need to do is determine whether the problem can be replicated in test. Generally speaking, if we have not seen the problem in test it is because the scenario has not been attempted there or, more often, because we do not have as much data in test. As we first approach the problem, we simply want to know whether a test environment is available to reproduce the problem. We will not act on this information yet, but we will come back to it later. 


Measure. The next step in the diagnosis will be to look at whatever measurements are available. When assessing a performance problem, a bit of speculation can be helpful, especially in the context of previously observed behaviors with the architecture, but we always want concrete measurements before we take action. 


Use Built-In Tools to Isolate the Problem. Some measurements will typically be available from the production environment by default, but we will want to leverage operating system tools and custom tools to get more information from each of the systems in the chain of systems that are exchanging data. This will allow us to at least isolate the individual platform or subsystem that seems to be having the most negative effect on overall system performance. That isolation is key because it will allow us to focus our efforts. More importantly, it will allow us to engage specific team(s) of subject matter experts. 


Four Focus Areas. For each tier of the system, we will want to obtain information for four items - CPU consumption, memory consumption, disk accesses, and network transfers. Network transfers generally do not bottleneck for modern systems, but we nonetheless want to take at least a cursory review of the network access. 


Generally speaking, we will find that one or more of the four key potential resource bottlenecks is "saturated". By "saturated", we typically mean that the resource is consistently being used at more than 80% of its capacity. It is a common misconception that a resource has to be pegged at 100% to be slowing the system down, or conversely that any spike to 100% utilization implies a problem. Occasional spikes are natural; it is sustained utilization above roughly 80% that indicates saturation. 
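
As a minimal illustration of measuring rather than guessing, a quick sampling loop like the one below can show whether CPU utilization is consistently above the 80% threshold or merely spiking. This is a sketch assuming a Windows host and the System.Diagnostics.PerformanceCounter API (a separate package on .NET Core); the same data is available from perfmon or other operating system tools without writing any code.

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Threading;

    class CpuSaturationCheck
    {
        static void Main()
        {
            // Total "% Processor Time" across all cores; the first NextValue() call returns 0.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            cpu.NextValue();

            var samples = new float[60];
            for (int i = 0; i < samples.Length; i++)
            {
                Thread.Sleep(1000);              // one sample per second
                samples[i] = cpu.NextValue();
            }

            double average = samples.Average();
            double shareOver80 = samples.Count(s => s > 80f) / (double)samples.Length;

            Console.WriteLine($"Average CPU: {average:F1}%");
            Console.WriteLine($"Samples above 80%: {shareOver80:P0}");
            // Sustained readings above 80% suggest saturation; brief spikes to 100% do not.
        }
    }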


Once we have isolated the specific "saturated" resource, we can apply analysis techniques specific to the affected tier or module. 


If we isolate a problem to the database level, there are many specialized techniques that we can pursue. This is important to keep in mind, as the most common source of performance problems is a database. 


Profiling. If we find a slow algorithm on the application tier or in the JavaScript, the next logical step is typically further isolation with a profiling tool. 



You have to design a new web site for optimum scalability. Assume that the system is the size of Facebook. What are your design considerations? 


It is important to distinguish between "large" and "massive" distributed systems. For the sake of argument, let's assume that we understand how to scale "large" systems and limit discussion to "massive" distributed systems like Facebook or Google. 


In addition to the traditional techniques used for vertical and horizontal scaling, "massive" systems typically employ some or all of the following techniques:

 

(1) Vector clocks to establish the ordering of related events (causality) across nodes without relying on synchronized system clocks. 


(2) Distributed hash tables (DHTs) to distribute key-value pairs across multiple servers (see the consistent-hashing sketch after this list). 


(3) Quorum protocols to balance consistency and availability within the limits imposed by the CAP theorem. 


(4) The Gossip Protocol to distribute information on server failures. 


(5) Hinted handoffs and Merkle trees to allow for late updates of servers that are temporarily unavailable. 
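
As a hedged illustration of item (2), the sketch below shows the consistent-hashing idea that underlies most DHTs: keys and nodes are hashed into the same space, and each key is owned by the first node encountered clockwise around the ring. The node names and the use of MD5 are illustrative choices, not a reference to any particular product.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;

    // Minimal consistent-hash ring: a key is owned by the first node at or after the key's hash.
    class HashRing
    {
        private readonly SortedDictionary<uint, string> ring = new SortedDictionary<uint, string>();

        public void AddNode(string node, int virtualNodes = 100)
        {
            for (int i = 0; i < virtualNodes; i++)
                ring[Hash($"{node}#{i}")] = node;    // virtual nodes smooth the key distribution
        }

        public string GetNodeFor(string key)
        {
            uint h = Hash(key);
            foreach (var entry in ring)
                if (entry.Key >= h) return entry.Value;
            return ring.First().Value;               // wrap around the ring
        }

        private static uint Hash(string s)
        {
            using (var md5 = MD5.Create())
                return BitConverter.ToUInt32(md5.ComputeHash(Encoding.UTF8.GetBytes(s)), 0);
        }
    }

Adding or removing a node only remaps the keys that fall on that node's segments of the ring, which is what lets these systems scale out without rehashing everything.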



Can you walk us through a major new application that you designed and how you did it? 


Among many other projects, I was the architect for the FORTRESS system in 2003 while I was at Fidelity Investments. This system replaced the authorization systems for 36 of Fidelity’s business units. The design was based on research by Professor Rose Gamble of the University of Oklahoma in hierarchical Role-Based Access Control (RBAC) systems. The research demonstrated that you could take a rudimentary RBAC system and layer many different hierarchical interfaces on the same data for many different sets of stakeholders. An example of why the system was needed can be found in the way a company is organized: indicative data for one person may be reflected in a management hierarchy, a geographic hierarchy, and a business unit hierarchy. The new system allowed any arbitrary hierarchical organization to leverage the same data. 


We introduced a new database with .NET code libraries borrowed from other projects. Every field in the database had built-in audit records based on triggers. We exposed all interfaces through XML web services and worked with each of the 36 affected business units to ensure that the system was integrated correctly.
 


 Tell us how you handle conflict? How do you handle conflict with managers? 


Some conflict is natural in business - and it can be positive if it is handled correctly. This is especially true when you have to make difficult tradeoffs or when the team is under pressure. 


There are really four cases to consider when managing conflict as an architect: when you are the decision-maker, when you are dealing with peers, when you are dealing with senior management, and when you are dealing with customers. 


First, let's address conflict with "managers". As an architect, you are a "de facto" manager or director and should act accordingly. An Architect is not an individual contributor, so you should not be too concerned with titles, but generally treat most team or departmental managers as peers. 


When you are the decision-maker, you don't want to be afraid to make a decision, but you do want to listen to others and establish a "consensus". A consensus is not a vote on a solution; it means you carefully gather the input of others while remaining ultimately responsible for the decision. 


When you are dealing with your peers, whether technical or managerial, you will ideally have devoted the time to establish a relationship ahead of time. That will go a long way towards mitigating the situation. Generally speaking, you should be cordial and consider the long term relationship with your peers. Be willing to compromise where appropriate. Sometimes you will have to escalate something to your senior management team, but most in senior management would prefer that you work out conflicts on your own. 


When you are dealing with senior management, perhaps your own manager, this is a case where someone can override your decision-making authority. Some developers - some immature developers - do not manage a situation like this well, but it is important that you respect senior decision-makers and accept their judgement. It is OK, and generally recommended, to ask once whether there is room for further discussion or to ask for your judgment to be considered. You may be very sure that your direction is best, but you may be wrong, and ultimately it does not matter - you are paid to execute your manager's wishes.


When dealing with customers, a degree of conflict may actually be more common than with the other scenarios. This is especially true because architects are often pulled in to rescue very difficult situations. In this situation, you will want to be unfailingly polite even if the other party is not. As an architect, you want to be unfailingly honest as well, perhaps even when it does not put your organization in the best possible light. Handling customers is sensitive and does take a lot of experience. If you are not experienced with dealing with customers, you should partner with someone who is. If you are dealing with conflict situations with customers by yourself as an architect, it is often a good idea to get some backup from a Product Manager, a salesperson, or perhaps even your manager. 


What is your approach to technical mentoring?


As an architect, you may be mentoring managers, QA personnel, or system analysts, but the most common mentoring scenarios will be for new hires, experienced developers, and senior developers. In these cases, you are mentoring people to get to the next level in their careers. Specifically, you want to mentor new hires into experienced developers, experienced developers into senior developers, and senior developers into architects. 


In each case, it is best to set expectations of the person you are mentoring that because software development is a complex profession, there is no "quick" way to move forward in your career. Rather, if an engineer is willing to listen and follow your advice, they will notice an improvement in their skills over a period of months as long as they are practicing the skills that you are mentoring them on. You will also want to recommend a professional reading list and be prepared to discuss the reading materials from your own experience. 


For new hires and experienced developers, the best advice you can give them is to focus on writing code all day, every day. Until an engineer reaches the senior developer level, they should simply be writing as much code as possible and engaging in a structured program of study. You will know a developer has reached the "senior" level when they can typically write code for a full day without needing any advice. 


For new hires, you want to get them writing correct code first while you are equipping them with the skills to work successfully as part of a development team. These would include simple things like source control management, unit testing, working with quality assurance, and software process. 


Experienced developers, typically with 2-5 years of experience, can be assumed to write correct code reasonably well and to work as part of a team, but they typically cannot go a full day without seeking additional technical guidance. Developers at this experience level are typically not good at estimating either. You will need to mentor them on topics that expand their breadth, such as databases, networking, dealing with customers, software deployment, and requirements analysis. You will also want to help them expand their depth on topics like automated testing, object-oriented programming, and documentation, and generally help them refine their coding techniques. As mentioned earlier, you will need to mentor them on estimating as well, focusing on considering all of the tasks and making the estimates granular. 


To mentor a senior developer, typically with at least five years of experience, to an architect, it is best to set expectations of 18 months of hard work. The goal is to enable the developer to lead small teams and to be able to make complex technical tradeoffs. Ideally, they should fill in for other roles such as system administrator, DBA, and system analyst as opportunities arise. You will want them to mentor other developers, learn project management, write documentation, lead technical discussions, and do a lot of diagramming and whiteboarding. For a senior developer to grow, they need to be independently responsible for significant portions of complex technical initiatives. 


You have a team member that does not work well with others. How do you manage that? 


This sounds like a people management question and it is, but as an architect, you have to be able to handle people management concerns as well. 


It is important to keep in mind that developers add value primarily through their skill set rather than their ability to work with others. Nonetheless, you would not want to hire someone new on your team who has difficulties working with others. If they are already on your team and adding value, however, you have to manage the situation. 


First, you need to ask yourself why the person does not get along well with others. If the person perhaps has a substance abuse problem or a serious personality disorder, then that is different than someone that is simply difficult to work with. 


Barring the unmanageable cases mentioned above, you will generally find that some people can work well enough within your team while others cannot. In either case, the main thing you want to ensure is that most communications outside of your team go through someone else. That could be a manager, an architect, or a business analyst; the typical case is that someone who has difficulty with relationships can at least be accepted within their own team. 


For those who cannot work well within their own team, but nonetheless offer business value, the typical approach is "firewalling", in which the person has minimal interaction with other team members besides senior personnel like managers or architects. 


It is easy to assume that we could simply eliminate an impolite person from the team, but it's really a matter of tradeoffs. If, notwithstanding the personality concerns, the person still adds value, you need to find a way to manage the situation. 



You have been assigned as the architect for a large project. What does your day-to-day work look like? 


An architect may be working on several projects at the same time, but this answer only addresses the day-to-day for a single large project. 


The day-to-day depends on the stage of the project in the software development lifecycle. Typically, the hard work for the architect on a large software project will be at the beginning stages of the project and near the end. The project planning, estimating, and design happens at the beginning. Deployment, quality assurance, information security, and sustainment activities happen near the end of the project. While the bulk of the work in a project is the actual coding, the coding is actually the easy part of the project - as long as the project initiation activities have been done correctly. 


The day-to-day activities during each of the three abovementioned stages of a project will be different. 


During the initiation stage of the project, an architect may be filling in for roles other than his own, including project manager, product manager, or system analyst. In a typical software project, the staffing may not be set right away, so it is pretty typical to be handling at least some work outside of the traditional architect role. Even if you do not need to work actively in other project roles, you will at least need to work closely with those who are and to review their work. 


If an architect is doing his job correctly, the project initiation phase will typically be the busiest time of the project. It is probably not a good time to plan vacations or 40-hour work weeks. The reason this phase is the most important is that every action you take at the start of a project either reduces risk or reduces uncertainty. A small improvement in design, requirements, planning, or staffing can pay big dividends later in the project. It is a cliché, but true, that the time or cost savings from early positive adjustments may be 100 times greater than the time and cost spent in planning. 


Another key aspect of early project design and planning is that decisions need to be made on what you actually need to do to accomplish goals. Other members of your team may know how to do things once clear tasks have been decided on, but will typically need an architect's guidance on determining the appropriate tasks and the relative priority of those tasks. 


The last key aspect of the early stages of a project that makes it a busy time for the architect is the technical tradeoffs involved in designs. There are typically many different ways to accomplish a technical goal, and it may very well be that no one way clearly satisfies all goals most effectively. While you may be able to evaluate some of these tough tradeoffs from your existing knowledge, it is more likely that you will have to research and prototype carefully. More importantly, you will want to discuss the alternatives with others and document them carefully. This takes time. 


The end phase of the project is also a busy time for an architect. As the project nears the end, there are concerns with schedule, quality assurance, information security, deployment, and support. In most cases, the development staff has limited experience with these issues, and they are probably involved in fixing defects anyway. Addressing these issues must often be done quickly, requires intense work with other teams, and frankly requires a higher degree of technical precision than writing code. 


Towards the end of a project there is often a period where the entire team is fixing defects or writing tests. This is a good time to contribute and set the kind of example that motivates the team. 


Towards the end of a project there will typically be a lot of documentation that needs to be written. Most developers either won't do it or are not very good at it, so you will typically have to write a lot of the documentation yourself. This is especially true for technically-oriented documentation. 


We have discussed the beginning and the end of the project as the busiest times for the architect. It seems counterintuitive that the implementation phase would be the least busy for the architect when it is the busiest time for the rest of the team, but it is important to bear in mind that architects make the plan for the team to execute. While the architect may have a defined role contributing some development work in sprints, that is not generally a best practice. On a large project, there will be adjustments, escalations, mentoring, and other activities to keep an architect busy, but less than at the beginning and end of the project. It may even be a good time to take a vacation! 



Can you describe the pros and cons of multithreading?


Multithreading allows a computer program to multitask, thus often leading to performance improvements due to greater throughput of the program. Multithreading cannot help every program, but is typically very useful when processing multiple workloads concurrently. 


From a positive perspective, multithreading allows us to use more than one CPU at a time. It also allows some threads of execution to continue while other threads are blocked, typically waiting for I/O or other non-CPU resources. There is a lot more that could be said about the positives, but simply using otherwise unused CPU cycles is a clear advantage. 


There are several negatives of leveraging multithreading. All of the threads share the same memory, so care must be taken to ensure synchronized access to that shared memory. Another disadvantage is that there is some overhead to managing the multithreading, which typically means that a single thread of execution will run a bit slower than it would if the program were designed for running with a single thread of execution. Perhaps most importantly, it typically takes a fair amount of extra time to code and test multithreaded programs. 
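
To make the shared-memory concern concrete, here is a minimal sketch using C#'s lock statement to synchronize access to a shared counter. Without the lock, many of the increments would be lost to race conditions; with it, the cost is the synchronization overhead mentioned above.

    using System;
    using System.Threading.Tasks;

    class SharedCounterDemo
    {
        private static int counter;                       // shared mutable state
        private static readonly object gate = new object();

        static void Main()
        {
            // Ten tasks each increment the shared counter 100,000 times.
            var tasks = new Task[10];
            for (int i = 0; i < tasks.Length; i++)
            {
                tasks[i] = Task.Run(() =>
                {
                    for (int j = 0; j < 100_000; j++)
                    {
                        lock (gate)                       // without this lock, updates would be lost
                        {
                            counter++;
                        }
                    }
                });
            }
            Task.WaitAll(tasks);
            Console.WriteLine(counter);                   // prints 1000000 because access was synchronized
        }
    }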



Give us some examples of when you have deliberately used design patterns. 


There are many different kinds of design patterns, but generally we are referring to the 23 patterns that were introduced by Erich Gamma and the "Gang of Four" authors in 1994. Only about half of these patterns are commonly used, so it is unreasonable to ask whether someone has deliberately used all of them. 


When considering the use of design patterns, it is important to understand that you may have been deliberately and routinely using design patterns, but did not realize that you were doing so. Studying design patterns is helpful to refine and understand what you have been doing as well as refine and extend it. Perhaps more importantly, design patterns extend and refine basic object-oriented design principles, so if you want to do OO design effectively, studying design patterns can be helpful. 


Singleton: People often speak to the times they have implemented the Singleton pattern because it is frequently helpful and must be implemented explicitly. I have implemented it on many occasions to avoid loading multiple instances of the same class for multithreaded operations. 
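
A minimal sketch of a thread-safe singleton in C#, using Lazy<T> so the single instance is created once and shared safely across threads; the ConfigurationCache name is just an illustrative stand-in:

    using System;

    // Thread-safe singleton: Lazy<T> guarantees the factory runs exactly once,
    // even when multiple threads request the instance simultaneously.
    public sealed class ConfigurationCache
    {
        private static readonly Lazy<ConfigurationCache> instance =
            new Lazy<ConfigurationCache>(() => new ConfigurationCache());

        public static ConfigurationCache Instance => instance.Value;

        private ConfigurationCache()
        {
            // load expensive shared state here
        }
    }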


Factory: I have implemented the Factory pattern to load different data structures from a data dictionary to implement a data gateway. The classes produced by the factory depended on the type of the data being used. This was then extended to an Abstract Factory by using reflection to load class instances for data structures that were not knowable at compile time. 
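
A simplified sketch of that kind of factory is shown below; the gateway types are hypothetical stand-ins for the data-dictionary classes described above, and the final comment notes how reflection extends the idea to types not known at compile time.

    using System;

    public interface IDataGateway
    {
        object Load(int id);
    }

    public class CustomerGateway : IDataGateway
    {
        public object Load(int id) => $"customer {id}";
    }

    public class OrderGateway : IDataGateway
    {
        public object Load(int id) => $"order {id}";
    }

    public static class GatewayFactory
    {
        // The concrete gateway returned depends on the type of data requested.
        public static IDataGateway Create(string dataType)
        {
            switch (dataType)
            {
                case "Customer": return new CustomerGateway();
                case "Order":    return new OrderGateway();
                default: throw new ArgumentException($"Unknown data type: {dataType}");
            }
        }

        // For types not knowable at compile time, reflection can do the same job:
        // (IDataGateway)Activator.CreateInstance(Type.GetType(typeName))
    }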


Decorator: I have used the Decorator pattern to provide security for web services. We implemented an unsecured version so the service could be imported as a .NET library, but we had to make sure that there was a clean security model for external web service calls. Essentially, the internal and external code implemented the same interface, and the security code was implemented as middleware.

Proxy: Simply implementing web services based on WSDL documents - which generate web service proxies - is an example of using proxies deliberately.

Adapter: Adapters have been used many times to solve a variety of integration problems. This was especially true when implementing security systems that had to leverage many different underlying authorization data stores. 


Chain of Responsibility: Any time middleware is used, the chain of responsibility pattern is being used deliberately. This is pretty normal when implementing ASP.NET MVC on .NET Core, where the request pipeline is a chain of middleware components. 
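
As a small illustration, the ASP.NET Core request pipeline below chains two inline middleware components; each one either passes the request to the next link or short-circuits the chain. This is a sketch using the standard minimal-hosting style, not code from any particular project.

    // Program.cs (ASP.NET Core minimal hosting)
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.Use(async (context, next) =>
    {
        // First link: simple request logging.
        Console.WriteLine($"Request: {context.Request.Path}");
        await next();                        // hand off to the next link in the chain
    });

    app.Use(async (context, next) =>
    {
        // Second link: a guard that can short-circuit the chain.
        if (context.Request.Headers.ContainsKey("X-Blocked"))
        {
            context.Response.StatusCode = 403;
            return;                          // not calling next() stops the chain here
        }
        await next();
    });

    app.MapGet("/", () => "Hello from the end of the chain");
    app.Run();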


Iterator: This is a very common pattern that is used every time a foreach loop is used.
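
A one-method example: any iterator built with yield return is consumed by foreach through IEnumerable<T>, so you get the Iterator pattern without writing the enumerator class by hand.

    using System;
    using System.Collections.Generic;

    class IteratorDemo
    {
        // yield return produces an iterator object behind the scenes.
        static IEnumerable<int> EvenNumbersUpTo(int max)
        {
            for (int i = 0; i <= max; i += 2)
                yield return i;
        }

        static void Main()
        {
            foreach (int n in EvenNumbersUpTo(10))   // foreach drives the iterator
                Console.WriteLine(n);
        }
    }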

Bridge: I have used this pattern often when refactoring to decouple an abstraction from its implementation - for example, when splitting a monolithic class into layers.

Observer: This pattern is used with events, typically in a publish/subscribe mechanism. 
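
A minimal publish/subscribe sketch using C# events; the OrderService and the two subscribers are hypothetical examples, not taken from a real system.

    using System;

    // Publisher: raises an event; it has no knowledge of who is listening.
    class OrderService
    {
        public event EventHandler<string> OrderPlaced;

        public void PlaceOrder(string orderId)
        {
            // ... persist the order ...
            OrderPlaced?.Invoke(this, orderId);      // notify all subscribers
        }
    }

    class Program
    {
        static void Main()
        {
            var service = new OrderService();
            service.OrderPlaced += (sender, id) => Console.WriteLine($"Email sent for {id}");
            service.OrderPlaced += (sender, id) => Console.WriteLine($"Inventory updated for {id}");
            service.PlaceOrder("A-1001");
        }
    }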



How would you use UML to describe the design of a modest application? 


Unified Modeling Language (UML) defines multiple diagram types (nine in UML 1.x, fourteen in UML 2) intended to address all phases of the software development lifecycle, but they vary in usefulness. UML was very popular in the late 1990s when we were building large-scale systems with the waterfall method and had to specify the design of the entire system up front as much as possible.

When using UML, it is important to understand that it is really just for documenting and diagramming a design. With Agile methods, designs change more often, which suggests that we would have to update the diagrams as well, which can be time-consuming. 


The main value of UML is to communicate design details in an unambiguous way to technical stakeholders, especially analysts and developers. If the development of such diagrams does not contribute to those communications, then the diagrams are not worth creating.

The three types of diagrams that tend to be the most useful include sequence diagrams, class diagrams, and use case diagrams. 


Sequence diagrams are useful for communicating the flow of messages (or method calls or events) from one system component (or object) to another. This is a very helpful vehicle for communicating with system analysts. 


Class diagrams demonstrate the composition of various classes and the relations between those classes. This is useful for communicating intent to developers.

Use case diagrams allow us to capture the actions of people and systems at the edge of our system. This is another diagram that is very helpful for communicating with analysts, but it can also be used to inventory the features of the system with less technical stakeholders. 



What do you do to stay up to date on technology?

The answer is simple - constant study. The important thing with studying is not just to study a lot, but to study the right topics and to use the right resources to study. 


Theory and Practice: Software development and architecture is a hands-on profession. You need to be able to write code to perform a task rather than just read about it. This means that when you are studying, you should be writing some code as well. How much code you write when you are studying probably depends on how much code you write at work. If you are coding all day at work, then perhaps coding to learn would comprise about 25% of your learning time. If you don't spend a lot of time writing code, then you should probably spend about 50% of your time writing code. 


Pluralsight: I have to mention www.pluralsight.com simply because I believe in it a great deal. You can get a lot of free resources from the Internet, but as a professional, your time is important. A Pluralsight membership does not cost much - $25 monthly when this was written in 2018. While everyone learns differently, most people agree that well-presented online courses are a relatively easy way to learn. 


I like to recommend Pluralsight because they simply have thousands of consistent, well-curated and well-coordinated course offerings across hundreds of different IT topics. 


New Technologies: You don't necessarily need to spend a lot of time studying the latest technologies. It is more important to master core technologies for your platform of choice. This gives you a framework into which new technologies fit naturally, because most of them are extensions of your core knowledge. 


Pick a Platform: You can really only "master" one platform stack. The major stacks are the Microsoft stack and the Java stack, but there are other stacks as well. Mastering "core" skills means learning everything you can about that stack. For instance, mastering the Microsoft stack would mean you understand SQL Server, C#, ASP.NET, and .NET Core very well. 


The Internet: While there are some software development positions that don't require understanding the Internet, most positions do. It is essential to understand at least some IP networking, network protocols, and light web development with HTML 5, Cascading Style Sheets (CSS), and JavaScript if you are going to be working in information technology. If you don't know much about the Internet and want to know the best place to get started, check out https://w3schools.com 


Specialization: If you are an architect, this generally implies that you are not a "specialist" in a particular technology such as SQL Server or SharePoint, but rather more of a technical generalist. In some cases it could be profitable to specialize in something, but it is generally better to spend your time mastering the other skills mentioned here. 


Business and Leadership: Business skills and leadership skills are often confused with one another, but they are really separate and distinct. You will generally not need to know a lot about business topics the way an MBA would. In all likelihood you will be surrounded by business professionals throughout your career, and you will likely never have to master skills like accounting, finance, sales, and marketing. 


You should, of course, consider learning all you can about the domain you are working in. That is, if you work for a transportation firm, you should endeavor to learn a lot about transportation. Generally speaking, the best domain knowledge will be the domain knowledge that is specific to the job you are in. One of the key differences between architects and senior developers is that architects do understand the business of the company that they are working for. 


Leadership, on the other hand, is the most valuable skill that anyone can have. It is true that some people are "natural leaders". That is generally not true of developers who rise to architect level roles, as they are generally introverts. Whether you are good or bad at leadership, you can learn through training and experience. Take every opportunity you can to learn and exercise leadership skills. 


Project Management and Process: You will want to learn a lot about project management and process. You can't rely entirely on project managers and product managers, for the same reason that project managers cannot rely exclusively on architects for all technical assessment and planning. Even if you are working with the world's best project manager, you still need to advise them, understand them, and devise actions based on their plans. In many cases you may have to manage the project yourself or work with an inexperienced project manager. In either case, you will find project management and process skills to be very valuable. 


Just In Time Learning: Sometimes the most effective learning you can do is "just in time" learning. That is, as you start working with new technologies and methods, that is a good time to improve and maintain your skills with those technologies and methods.


How much hands-on work do you do? 


As a typical architect, you will probably do about 50% hands-on work. There are some people with an "architect" title that do less (or even none) and some people in an "enterprise architect" role that may have little reason to be doing a lot of coding even if they are capable of it. Most architects assigned to a large project, however, will be doing about 50% hands-on work. 


You don't generally want to plan for doing a lot of defined sprint development work, but there will be adjustments, prototyping, escalations, filling in for other team members, support, automated testing, deployment, builds, and other activities to keep an architect busy with hands-on work. The work will find you without your having to look for it. 



How do you conduct code reviews? 


Code reviews should not be confused with code inspections. A code inspection is intended to catch flaws and is necessary for some types of very sensitive applications. Code reviews are often called "peer reviews" to distinguish them from the more invasive code inspections, which are really another topic. 


Code reviews are probably the most overlooked means of improving a team's quality and productivity. The best organizations "require" code reviews before allowing changes to be pushed to quality assurance. 


You will certainly find the occasional error in a code review - this is especially true for junior developers - but that is not the only benefit of code reviews, nor is it really their intent. 


The main benefit of code reviews is not to make the developer stronger, but to make the software development organization stronger. Reviews give us an opportunity to ensure that we are following "best practices" and team norms, creating sound estimates, unit testing code the right way, and addressing nonfunctional requirements properly, as well as to mentor junior developers. There are some who would suggest that this is a good opportunity to enforce coding style constraints, but it is generally OK to allow some flexibility in coding styles as long as the code can pass the necessary tests. Another hidden benefit of code reviews is "peer pressure" - if a developer knows that his peers will be reviewing his code, he is likely to be more cautious with it. 


As to how to conduct code reviews, it is important to understand that you cannot review everything. You need to pick the parts of the code that are the most important and focus on those parts.

There are some that suggest that code reviews can be done as a team, but that takes a commitment of a lot of hours that you probably don't have. You will generally be better off by assigning the review to one person, then letting the author and the reviewer work together. 


Code review findings should be documented. The ideal is to capture the code review findings in the source control system. Indeed, most popular source control systems already include support for code reviews. 


The ideal code review should include some preliminary discussion of the scope of the review and some discussion afterward when the findings have been addressed.

Our organization already has an optimal architecture. How will you work within that constraint?

This is a loaded question. No significant system has an "optimal" architecture. This suggests that perhaps the questioner is misinformed, but you may not want to suggest that during the interview. 


The best answer here is to suggest that you will assess that for yourself should the organization decide to hire you. Your assessment would include conformance with requirements and architectural qualities as well as determining cost-effectiveness and the ability to respond to changes. You can also assess for the applicability of new technologies, plan for the future, documentation, supportability, etc.

The truth is that there is no such thing as an "optimal" architecture generally. An architecture can only be optimized with respect to a set of requirements and qualities for the same reason that the architectures of buildings are different. 



Our organization has a lot of process problems because of the pace of our growth. How will you handle that?

This sounds like more of a project management problem than an architecture problem, and it is, but process is a big part of what a real architect does. When an organization says it has "process problems", there is legitimacy to that, but they are essentially saying, "We don't know what to do next." They need a plan, and they need to start executing against the plan. You may be a little concerned about your ability to do this correctly, but if you are an experienced architect, you should be confident in your ability to make good decisions here.

First, if the organization has come to a standstill or there are clearly people that are not being productively utilized, you still need a plan and you'll want to come up with one quickly. A day is OK, but an hour is better. For the moment, however, let's assume that there is no such crisis - there normally isn't - and that you have a couple of weeks to revise plans. 


Before you can act, you will want to know at least four things: 


(1) Background. You will want to assess the situation by getting some background information on their current situation and determining what the organization's objectives are. A good start for situational assessment can be found on the "Project Risk Assessment" checklist on the downloads page. That will give you some background information and can guide interviews with key stakeholders. 


As part of your situational assessment, you will want to determine the technical, infrastructural, time, and personnel resources you have to work with. Some whiteboard sessions to get a handle on the technology would probably also be appropriate. 


(2) Objectives. To determine the objectives, you will want to identify the "Product Manager" in the organization. Most medium to large sized companies have someone identified as a Product Manager - someone who translates business plans into requirements and high level IT action plans, but may not necessarily be a technical IT person. If the organization has no Product Manager, someone is still performing that role and you will want to seek that person out to establish goals. That person could be the CEO, COO, CIO, or a project manager. 


(3) Who is in Charge? You may be the decision-maker, but unless you own the company, you will still have one or more people you are accountable to - typically many. You will need to find out who they are and what their areas of influence are. 


(4) How do we Communicate? There will always be some type of reporting structure, even if it is an informal one. You will want to start exercising that communication structure to report on actions and to get decisions to act on. 


Once you have established the organization's goals, it is time to start defining concrete actions and assigning them to people based on your assessment of resources and goals. You may not get all the task assignments correct right away, but planning and execution is an iterative process, so be satisfied with continuous improvement. 


Once you get the organization stabilized, you do want to establish a software development lifecycle and you should be able to do so by simply iterating through the steps mentioned above. 



How do you properly leverage asynchronous queuing systems? 


Queuing systems are, by their nature, asynchronous. The key difference between a queuing system and other asynchronous systems is that queuing systems store their messages persistently. This opens up a number of complexities, but it also opens up a number of great architectural patterns that are well worth the added complexity if those patterns will benefit your application. 


The key realization about queuing systems is that they add three degrees of freedom that non-queued systems do not have. Not all systems need these degrees of freedom, but for systems that do, queuing systems should be actively considered. 


Advantage #1: Different Times. For most client/server systems, we assume that the systems are connected. If the connection fails, then functionality that depends on the connection fails. For queuing systems, the component that processes a message does not need to be connected to the system that sent the message. In fact, the two systems need not even be online at the same time, yet delivery of the message is still guaranteed. This is advantageous because either the sender or the receiver could be offline for a period of time and the message can still be processed. 


Advantage #2: No Constraints on Structure. As discussed earlier, most client/server systems are connected, but queuing systems are not. Since they are not connected, both the sender and the receiver have to know how to communicate with the queuing system, but the sender and the receiver do not need to know how to communicate with each other. This is advantageous because the communications on either end could change without affecting one another. 


Perhaps more importantly, one sender can communicate with many receivers and many senders can communicate with one receiver. Consider sending out bulk e-mail messages - they can go to many receivers, but are still considered successful even if some of the receivers do not open the e-mail in a timely manner. 


Advantage #3: Insulation from Network Complexities. The sheer complexity of communicating between various computer systems can be very daunting. It is not difficult when all of the systems are from the same vendor or using the same network protocols - homogeneous systems - but when you need to connect different types of systems, this is problematic. Admittedly, the near-universal applicability of IP networking has made this point less relevant, but this does remain one of the key advantages of queuing systems generally. 


Having specified the advantages, it is worth noting that you will need additional infrastructure, training, and licensing expenses to leverage queuing systems. If the advantages of doing so will help you, though, then it is usually an easy decision to start leveraging queuing systems. 


Lastly, you can leverage queuing design patterns even if you don't have a third-party queuing system. As long as you can store the messages persistently, you can develop a lightweight queuing system on your own, perhaps simply by the sender and receiver exchanging messages via a database table. 
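
As a hedged sketch of that last point, the class below implements a very small database-backed queue using ADO.NET and SQL Server. The MessageQueue table (with Id, Body, and ProcessedOn columns) is a hypothetical schema; a real implementation would also need ordering, retries, and poison-message handling.

    using System.Data.SqlClient;

    class DbQueue
    {
        private readonly string connectionString;
        public DbQueue(string connectionString) => this.connectionString = connectionString;

        // Sender: insert an unprocessed row.
        public void Enqueue(string body)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO MessageQueue (Body, ProcessedOn) VALUES (@body, NULL)", conn))
            {
                cmd.Parameters.AddWithValue("@body", body);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        // Receiver: claim one unprocessed row, mark it processed, and return its body.
        public string Dequeue()
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                @"UPDATE TOP (1) MessageQueue
                  SET ProcessedOn = SYSUTCDATETIME()
                  OUTPUT inserted.Body
                  WHERE ProcessedOn IS NULL", conn))
            {
                conn.Open();
                return (string)cmd.ExecuteScalar();   // null when the queue is empty
            }
        }
    }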

Software Architecture Interview Questions - Set 2

What are your thoughts on continuous integration and continuous delivery (CI/CD)? 


Continuous Integration (CI) is the process of building and testing your software as soon as it is checked in, either with each change set or periodically. Continuous Deployment (CD) is the process of deploying the build products to environments in a fully automated fashion. 


First, you have to ask yourself if CI/CD is suitable for your project. It tends to be a substantial investment for any project, but pays off handsomely for all but the smallest projects. It is noteworthy that most of the most important software systems in the world leverage CI/CD heavily. Let's go over the advantages of CI/CD. 


Advantage #1: Find and Fix Errors Fast. You will be running tests all of the time and the tests will be monitored for success. The moment a test fails, your team will know right away and be able to correct the problem before it ships to a downstream environment and interferes with production or test. 


Advantage #2: Incremental Feature Shipping. You will be able to ship small numbers of features, even one feature, in a deployment. This sounds inherently risky if you assume the state of software test and deployment prior to the advent of effective CI/CD, but in a modern software development environment, the risk of the incremental deployments is sufficiently low that they are worth doing for all but the most risk-averse environments. 


Advantage #3: Lower Costs. The fast fixing of the errors and the fast deployment of new features ends up controlling/reducing costs quite a bit - this is a good selling point for the management team. 


Having established that CI/CD is clearly advantageous, we need to determine which steps to take to implement it correctly. 


Step #1: Automated Testing. Setting up automated tests is the foundation of any CI/CD program. It is pointless to pursue a CI/CD program without automated tests. While it is easy to get started with automated testing, the key question before getting to step #2 is whether or not you have enough automated testing in place. The answer is inherently subjective, but there are a couple of helpful guidelines, keeping in mind that we are not yet moving to continuous deployment. 


First, at this stage you probably do not need to concern yourself with automating tests for the user interface portions of your system. While UI test automation does add value, it is more time-consuming than automating tests for other parts of the system and typically adds less value than testing the core system components. This may not be intuitive for business stakeholders, but it is generally true that the most complex parts of a system are not in the user interface. The other guideline is "code coverage" - the percentage of lines of code in the tested modules that is actually exercised by tests. Before you can have an effective CI program, you will generally want to be covering about 80-85% of the code in the modules you are testing automatically. There are many good code coverage tools, several of which are built into tools like Visual Studio. 
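
To make the foundation concrete, here is what one such automated test might look like in the style of xUnit; the PriceCalculator class is a hypothetical example, and any .NET test framework (MSTest, NUnit, xUnit) would serve the same purpose under CI.

    using Xunit;

    // Production code under test (hypothetical example).
    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, decimal percent) =>
            price - (price * percent / 100m);
    }

    // Under CI, tests like this run on every check-in; a failure fails the build.
    public class PriceCalculatorTests
    {
        [Fact]
        public void ApplyDiscount_TakesTenPercentOff()
        {
            var calculator = new PriceCalculator();
            Assert.Equal(90m, calculator.ApplyDiscount(100m, 10m));
        }
    }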


Step #2: Continuous Integration. Once you have a sufficient amount of automated testing, you can move onto CI. That is, you can integrate the testing into your build automation. This assumes that you already have build automation set up. If you don't, that is a different topic. 


If any of the tests fail, then the build fails. This is actually a very tricky phase of moving to CI/CD simply because the tests that may work on a developer's desktop may fail when they are run on the build servers for a wide variety of reasons. 


The main issues you will typically need to address before getting all of the tests to run cleanly are making sure that the test setup is done correctly, ensuring that all developers are using the latest version of the code every day, and fixing some tests that may not be deterministic. 


Step #3: Full Deployment Automation. If you have completed step #2, you have CI running and now need to complete CD. 


You have a means to install the software product, but at this point it may be manual and perhaps even undocumented. You will want to fully automate the deployment, typically by scripting it. You may have seen installers for various commercial products, but such products are typically intended to run only on a single platform or device rather than being a complex client/server system. You may be able to use installers as part of your installation process, but for complex products, that will only help with individual components of the install. 


More typically, you will want to use some type of scripting language. On Windows, for instance, PowerShell would be a good choice. You will want to start by manually installing the product and writing down all of the steps. Then you simply replicate all of those steps with the scripting language. 


In practice, this tends to be difficult because there are many things that can and will go wrong with product installations whether they are manual or automated. Additional details are for another question, but you have to think ahead to things being missing, things taking time, operations failing, security concerns, and other IT concerns when you are scripting an installation. 


Step #4: Rollback. If you are rolling out a new software feature, you may find that it is not working, there could be requirements problems, it could be interfering with other things, or there could be a variety of other problems. 


You will need to ensure that you have a rollback strategy to uninstall changes and restore the previous state of the application. This is not difficult for simple applications, but if the changes affect multiple platforms and components, it could get complicated. 


The rollback plan does not necessarily need to be scripted because it will presumably not be used often. It does, however, have to be a documented and tested plan with the appropriate components staged and ready. 


Step #5: Staging. Once you have automated deployment and rollback plans that have been tested, you will want to start using those deployment scripts to move the components to test environments. You will not want to use the new deployment scripts in production right away. If the procedures are complex, you will likely experience some problems that you did not and could not anticipate. You will want to go through some release cycles monitoring for and fixing problems until you have a high degree of certainty that the automated deployments are working effectively. 


Step #6: Production. By this time, you have completed all of the preliminary tasks and have fully functioning CI/CD. Enjoy it! 



How do you balance deadlines, quality, and customer commitments? 


There is no good answer to this question because it depends on the organization's priorities. When you have to make a tradeoff between these constraints, this is really more the job of the "Product Manager" rather than the Architect. Let's assume, however, that you have to make a decision. 


Customer commitments are really commitments to deliver quality on time, so if you find yourself balancing additional customer commitments, it is either because unexpected support work is creeping in or because your team is being overloaded - a process problem. This is not uncommon, of course, but it is a process problem that needs to be addressed while you improvise a bit to get past it. In a situation like that, you may get lucky with "heroic action", but that is not a repeatable plan. 


That leaves the tradeoff between deadlines and quality. This one is a little more subtle and subjective. The problem is that customers are paying for both quality and timeliness. 


In cases where timelines simply can't be changed - and there are some of those, with the so-called "Year 2000" problem being a great example - you really want to explore possibly cutting some functionality rather than quality. In most cases, however, you may have some flexibility on deadlines. 


It is important to bear in mind that deadlines can be changed easily, but the quality of a product cannot be changed easily. Consequently, you should be biased towards asking for a schedule slip rather than compromising on quality. 


When asking for a schedule slip, keep in mind that customers reasonably anticipate that you will ask for a schedule slip - once, anyway. If you are going to ask for a schedule slip, you should ask for the biggest adjustment you can get away with since you will damage your credibility if you ask for more than one slip. 


In summary, when you have to choose between deadlines and quality, you will generally want to advocate for higher quality. 



Where do you see yourself in five years? 


This is a tough question in an interview because you may very well have aspirations for more senior work than being an Architect. Having said that, if you are an architect, you are probably well past the point where you had to choose between becoming an architect or a manager, the typical two career paths after senior developer. When you are in an interview process, you want to be honest, but you don't want to overshare either. 


The best response here is to suggest that you worked hard for many years to become an architect, that you really love it, and can't imagine doing anything else. There is probably some truth to that for most of us. This line of reasoning is more credible as you get older. 



Can you walk us through the process of securing an important web site?


There is no one thing you can do to secure a web site. Any individual measure that you take can be defeated. Consequently, you need to layer the security by taking multiple overlapping measures. Notwithstanding the highly publicized security breaches of famous web properties, there is generally little to fear if proper precautions have been taken - many major financial firms have so far withstood vigorous attempts to breach them.


While the topic is complex, there are some key points that make up the security infrastructure around a critical web site.  


There are some security measures that protect the consumer from a bad business, but we are limiting the answer to protecting the business's web site.


Policy.  All web security systems must balance security, expense, and convenience, and the security policy captures those tradeoffs.  That is, protecting someone's blog from tampering will call for a less restrictive policy than protecting a nation's nuclear secrets.


Channel Security.  We need to make sure that we are encrypting the traffic to and from the web site so that attackers cannot intercept and use it.  Fortunately, this is fairly easy to do with Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL).
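
In an ASP.NET Core application, for example, enforcing the encrypted channel is typically a few lines in the startup code; this is a sketch using the standard UseHttpsRedirection and UseHsts calls, with the certificate itself managed by the host or reverse proxy.

    // Program.cs (ASP.NET Core)
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    if (!app.Environment.IsDevelopment())
    {
        app.UseHsts();                 // Strict-Transport-Security: browsers refuse plain HTTP afterwards
    }
    app.UseHttpsRedirection();         // redirect any http:// request to https://

    app.MapGet("/", () => "Secured endpoint");
    app.Run();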


Authentication.  Authentication and authorization are often confused, but they are separate concepts.  Authentication is simply the process of validating the identity of the person using the web site, typically using a username and password.  In most cases in 2018, username and password authentication is not considered sufficient for sensitive web sites like financials.  In these cases, "multifactor" authentication is typically used by asking challenge questions or sending someone a text message.  


Authorization.  Authorization happens after authentication.  Authorization simply determines what the web site user is allowed to do once they have accessed the site.  For instance, a system administrator would have different privileges than a standard user would, and a "guest" user would have fewer privileges still.  The typical method used to provide authorization is Role-Based Access Control (RBAC).
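
In ASP.NET Core, role-based authorization is commonly expressed declaratively; the sketch below is illustrative, with hypothetical role and route names, and it assumes authentication has already been configured.

    using Microsoft.AspNetCore.Authorization;
    using Microsoft.AspNetCore.Mvc;

    // Only authenticated users in the "Administrator" role can reach this controller.
    [Authorize(Roles = "Administrator")]
    [ApiController]
    [Route("api/admin")]
    public class AdminController : ControllerBase
    {
        [HttpGet("settings")]
        public IActionResult GetSettings() => Ok("admin-only data");

        // A standard-user endpoint elsewhere might use [Authorize] with no role,
        // and a public endpoint might use [AllowAnonymous].
    }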


Audit.  Even if you have all the security mechanisms in the world, they are not really all that helpful unless you can keep records of what people are doing.  Audit is simply keeping track of what people are doing.


Internet Attacks.  There are a number of specialized considerations for the Internet that do not apply to other types of IT systems.  Because the IP network protocols and the HTTP application protocol do not support built-in security, layering it on gets very complex.  The risks cannot be eliminated, but they can be mitigated with careful study and review; your resources for mitigating these problems need to be applied with careful prioritization.  A good starting point for understanding these concerns is the "Open Web Application Security Project" at http://www.owasp.org


Operating System Security.  Each operating system or other operating environment will have different security configuration needs, but you will want to ensure that you are following security best practices for that operating environment.


Database Security.  Just as with operating systems, each database system will have different security configuration needs, but you will want to ensure that you are following security best practices for that database system.


Storage Security.  Data is most vulnerable where it is being stored.  Physical security is not sufficient since, at least in theory, someone unauthorized could come into possession of the storage device.  You will want to ensure that all data on storage devices is encrypted.


Network Security.  Even if you have provided security for all other IT systems, you will want to provide protection against unauthorized intrusion on the network.  Components of this level of security include anti-malware, firewalls, DMZs, VPNs, network segmentation, analytics, and mobile security among other topics.


Vendor-Supplied Systems.  You generally want to be more biased towards buying trusted third-party components rather than designing and developing your own components.  The simple truth is that security systems have to be unbreakable and will thus take three times as much development effort to create as a typical line-of-business system.



How do You Handle DevOps?


What is DevOps?  DevOps has essentially evolved from what used to be (and sometimes still are) separate functions for software development and operations.  The evolution of DevOps as a separate discipline simply reflects the effects of increasing scale, complexity, and efficiency demands on IT operations generally.  Through most of the history of IT, operations departments like networking and database administration were siloed and separate from software development, but the most complex problems for many IT disciplines were often referred by default to developers, who were assumed to be capable of (or at least responsible for) tying it all together.


Insulate developers.  Software development is arguably the most complex and expensive part of corporate IT.  As a result, it is both efficient and cost-effective to insulate developers from operational concerns as much as possible so they can focus on coding.  The efficiency is even more important than the cost-effectiveness - developers really do need to focus on coding.  A good DevOps operation will do exactly that.  Such a team also insulates the operations teams from the developers.


Not Just About Development.  The term "DevOps" can be a little misleading because it seems to put the focus on software development.  While DevOps personnel do work closely with development and facilitate the success of the development team, a better term might be "user technical advocate".  While DevOps personnel are not necessarily the first point of contact with a customer, their real focus is on improving the overall quality, timeliness, and responsiveness of delivery to the customer through technical operations.  The difference between DevOps and Ops is that the Ops team supports all kinds of IT initiatives, while DevOps personnel specialize in bridging development and operations.


Fast Collaboration.  A key advantage of having a DevOps team that is separate from development is timeliness.  From a practical perspective, developers cannot make both feature quality and customer experience their top priority.  With a separate team, DevOps can focus on customer experience while developers focus on feature quality. 


Analytics - Measurements and Metrics.  To optimize the customer experience, you have to measure it.  There are potentially hundreds of different things to measure, analyze, report on, and optimize.  This is a key responsibility of the DevOps team.


Monitoring and Alerting.  Closely related to the analytics mentioned earlier is the topic of monitoring and alerting.  Many of the items we are measuring need to be reported on and acted upon if they are indicative of a problem.
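
A toy sketch of a threshold-based alerting check; the metric names and thresholds below are invented for illustration:

```typescript
// Toy alerting check: compare a measured value against a threshold and raise an alert.
interface MetricSample {
  name: string;
  value: number;
}

const thresholds: Record<string, number> = {
  "http.p95LatencyMs": 500,
  "http.errorRatePercent": 1,
};

function checkSample(sample: MetricSample): void {
  const limit = thresholds[sample.name];
  if (limit !== undefined && sample.value > limit) {
    // In a real system this would page someone or open an incident.
    console.warn(`ALERT: ${sample.name} = ${sample.value} exceeds threshold ${limit}`);
  }
}

checkSample({ name: "http.p95LatencyMs", value: 730 }); // triggers an alert
```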


Continuous Integration and Continuous Deployment.  This is perhaps the most important responsibility of the DevOps team.  As suggested earlier, customer responsiveness is key, so clean automated tests, fast builds, and fast, reliable deployment of new fixes and features serve that goal well.


Problem Triage.  In any complex system, there will be problems reported.  Some reports will be legitimate and some will not.  Some problems will result from the software and some from the environment.  The only thing you can rest assured of is that there will be problem reports.  The DevOps team will likely not be the first point of contact for problem reports, but they will resolve a lot of problems and will typically be responsible for routing the problems they do not solve to other teams, including the software development team.


Request Management.  Most software organizations have a uniform means of tracking requests and issues.  It is typically the responsibility of the DevOps team to maintain this system.


Agile.  While not mandatory to the success of the DevOps team, it is a best practice for DevOps to adopt agile methodologies for managing their workstream.



How do you handle validation of functional requirements?


This question is not about creating requirements.  Creating requirements is typically the job of a business analyst, but architects do need to validate the requirements. Creating functional requirements is probably more art than science and requires more knowledge about the business domain than about technology.


Functional vs. Nonfunctional Requirements.  "Functional" requirements are those that meet business needs directly.  "Nonfunctional" or "implicit" requirements are those that are inherent in the information technology design.  They are often deemed "implicit" requirements because they do not come from the business staff, but are typically specified by the architect.


"Validate" for Completeness and Correctness.  Requirements obviously need to specify what the system must do.  It is also important to specify what the system must not do and what it should do when there is a problem.  A well-written requirement typically follows a "temporal" model in which preconditions, actions, and postconditions are considered.

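As an illustration of the precondition / action / postcondition pattern, here is a hypothetical requirement expressed as a Jest-style test; the transfer function and account shape are invented, not taken from the original text:

```typescript
// Hypothetical requirement expressed in precondition / action / postcondition form.
interface Account { id: string; balance: number; }

function transfer(from: Account, to: Account, amount: number): void {
  if (amount <= 0 || from.balance < amount) {
    throw new Error("transfer rejected"); // what the system must NOT do: overdraw
  }
  from.balance -= amount;
  to.balance += amount;
}

test("funds transfer requirement", () => {
  // Precondition: source account holds at least the transfer amount.
  const source: Account = { id: "A", balance: 100 };
  const target: Account = { id: "B", balance: 0 };

  // Action: transfer 40 units.
  transfer(source, target, 40);

  // Postconditions: balances reflect the transfer and no money is created or lost.
  expect(source.balance).toBe(60);
  expect(target.balance).toBe(40);

  // Error path: attempting to overdraw must be rejected and leave balances unchanged.
  expect(() => transfer(source, target, 1000)).toThrow();
  expect(source.balance).toBe(60);
});
```
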

User-Centric.  The only people who can really tell you what the requirements of the system need to be are the people who will use it.  For large sites and systems, the users may be represented by a marketing team, but the key point is that the designers of the system cannot simply make assumptions about requirements.  Instead, business stakeholders need to be closely involved in developing them.


Avoid Ambiguity.  Each requirement should only be about one thing.  This may be in the form of a paragraph or a checklist - we are not suggesting what the format of the requirements should be.  The point here is that well-written requirements should be represented as individual concepts.


Consistency.  Requirements should not conflict with one another.  This happens more often than it should.


Verifiable. We will ideally have some way to verify the requirements in a very concrete way.  Generally speaking, it is ideal to "prototype" the design with concrete code and let the users validate the prototype.


Business Case.  This will likely be addressed by someone other than the architect, but each requirement should produce some measurable benefit.  It is not unusual for features to be requested or specified based on little more than speculation, and those are the features that often get cancelled or simply waste time and add complexity.  When developers and other technical staff add such features on their own initiative, this is often referred to as "gold plating".  While such a feature may make a lot of sense to the developer, what matters is whether the customer will actually pay for it.


Technical Consensus on Realism.  Business stakeholders often ask for features that are unrealistic.  We would all like to have anti-gravity machines, but actually asking for one is impractical.  Even if you as the architect feel that a feature request is realistic, you should validate that with the technical staff who are actually doing the work before overcommitting to the feature.



Can you give an overview of using Angular?


What is Angular? Angular is a front-end platform that brings object-oriented structure to web user interfaces, which is no small feat. It works by allowing HTML to be extended with custom elements and custom attributes, joining JavaScript and the HTML DOM into a single programming model.


Angular vs. AngularJS.  Angular and AngularJS are a bit different. AngularJS is the original JavaScript framework and is essentially Angular 1.x. Angular 2 and later is a rewrite built on TypeScript, which is essentially a superset of JavaScript that adds static typing and other OO features.


Components. Angular applications are made up of “components”. These components are classes with methods and properties. The classes are decorated with metadata for CSS styling, HTML display templates, and a “selector” to indicate how to incorporate the component with a custom element. 
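
A minimal Angular component sketch; the selector, template, and property names are illustrative:

```typescript
// A minimal Angular component: metadata (selector, template, styles) decorating a class.
import { Component } from "@angular/core";

@Component({
  selector: "app-greeting",          // used as a custom element: <app-greeting></app-greeting>
  template: `<h2>Hello, {{ name }}!</h2>`,
  styles: ["h2 { color: steelblue; }"],
})
export class GreetingComponent {
  name = "Architect";                // a property the template can bind to
}
```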


Binding. Binding allows one-way and two-way communication between the DOM and the components. One-way binding of component class properties into content is accomplished with "interpolation" using a double curly brace syntax. Binding of component properties to DOM element properties uses a square bracket syntax. Binding of DOM events to component class methods uses a parentheses syntax. Two-way binding combines the square brackets with parentheses inside them, the so-called "banana in a box" syntax.
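
A small component sketch showing all four binding styles; the names are illustrative, and [(ngModel)] assumes FormsModule is imported in the owning module:

```typescript
// The four binding styles described above, in one illustrative component.
import { Component } from "@angular/core";

@Component({
  selector: "app-binding-demo",
  template: `
    <p>{{ title }}</p>                              <!-- interpolation: class -> content -->
    <img [src]="logoUrl" alt="logo" />              <!-- property binding: class -> DOM -->
    <button (click)="onSave()">Save</button>        <!-- event binding: DOM -> class -->
    <input [(ngModel)]="searchText" />              <!-- two-way "banana in a box" -->
  `,
})
export class BindingDemoComponent {
  title = "Binding demo";
  logoUrl = "assets/logo.png";
  searchText = "";
  onSave(): void {
    console.log("saving", this.searchText);
  }
}
```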


Modules. As your application scales, you will likely want to organize it into feature-specific modules. This is no different than organizing code into libraries in other languages and platforms.
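
A sketch of a feature module grouping the illustrative components from the earlier examples:

```typescript
// A feature module grouping related components (names are illustrative).
import { NgModule } from "@angular/core";
import { CommonModule } from "@angular/common";
import { FormsModule } from "@angular/forms";

import { BindingDemoComponent } from "./binding-demo.component";
import { GreetingComponent } from "./greeting.component";

@NgModule({
  declarations: [GreetingComponent, BindingDemoComponent], // components owned by this module
  imports: [CommonModule, FormsModule],                     // other modules this one depends on
  exports: [GreetingComponent],                             // what other modules may use
})
export class GreetingModule {}
```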


Services. You can and should create services (not to be confused with web services or REST services) to introduce functionality that should be independent from components, should be shared among multiple components, or encapsulates external interactions. Services may wrap web services or REST services, but the concept of Angular services is a bit broader.
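
A sketch of an injectable service wrapping an HTTP call; the endpoint URL and types are hypothetical:

```typescript
// A simple injectable service wrapping a REST call behind a typed method.
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { Observable } from "rxjs";

export interface Customer {
  id: number;
  name: string;
}

@Injectable({ providedIn: "root" })   // one shared instance across the application
export class CustomerService {
  constructor(private http: HttpClient) {}

  getCustomers(): Observable<Customer[]> {
    return this.http.get<Customer[]>("/api/customers"); // hypothetical endpoint
  }
}
```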


Additional Techniques. There are additional specialized techniques for forms, routing, HTTP communication, and other less prominent concerns.



Can you describe how you would move an existing system to the cloud?

  

A DevOps Project. This will be a DevOps project rather than a development project. Some developer support will be needed, but a project for moving an “existing” system to the cloud rather than creating a new system is going to be more about operational concerns rather than development concerns.


What is the End Goal? The end goal is not about the technology, but how to support the business. To that end, you will want to have the existing system up in the cloud in a managed, secure, and cost-effective way. The cost-effectiveness is key – after all, there is no real point in doing the migration unless it is going to contribute to the organization’s profitability.


Four Key Steps. You first need to complete a general assessment and plan, then do the actual migration, then optimize to make sure you are running cost-effectively. Last, but not least, you will need to make sure that the new cloud environment is secure and has appropriate management controls applied to it.


Assessment. Before you can move forward, you need to understand what you already have for applications, data, and infrastructure. You will also need to understand the dependencies between the various components and work out which elements should be migrated to the cloud first.


Migration. Migrating things to the cloud does not necessarily mean you are ready to run them in production yet, but all the major cloud vendors provide numerous tools to ease the transition and to run in a "hybrid" fashion with mixed on-premises and cloud infrastructure. Physically moving things to the cloud is actually the easiest of the four main steps. You can essentially consider this step a "prototyping" step.

We will discuss four types of migration, from the simplest to the most comprehensive.  In practice, a large organization may pursue all four types concurrently for different applications.


Optimization. There will be several objectives to pursue with optimization. Most importantly, you will want to focus on cost-effectiveness. This means focusing on total cost of ownership, which is a bit more complicated than just looking at the raw billing figures from your cloud provider. 


It will also be necessary to take a hard look at performance during this phase. Your applications presumably need to be as fast as when they were running in your own data center. This cannot be assumed, but instead must be tested. It is also important to bear in mind that better performance means more expense in a cloud environment, but that is true in an on-premises environment as well. 


Lastly, there will typically be some non-negotiable standards for your organization. This is the appropriate time to adjust your environment to meet those standards.


Security and Management. By management, we simply mean the standard types of monitoring and alerting that are done for on-premises installations as well. These are essentials, of course, so it may seem curious to leave them for last and focus on assessment, migration, and optimization first. The simple fact is that security and management are "deterministic" tasks – that is, once you have made the decision to move to the cloud, you know that you can secure and manage the infrastructure in a timely and cost-effective fashion. Security and management are somewhat different for the cloud than for on-premises environments, but they are predictable.


Assessment: Discovery. The first part of an assessment is simply to figure out what you have. To someone who does not work in IT, this may sound like a trivial task. The truth is, even modestly sized IT firms typically have dozens of complex applications with all kinds of links and communication paths to one another, and just inventorying those applications can prove difficult. For Fortune 500 organizations, it is reasonable to assume that getting an inventory of 100% of the applications is impossible once mergers and acquisitions are accounted for. Fortunately, cloud vendors provide tools, in addition to inventory tools that organizations already have deployed, that can help a lot with discovery.


Assessment: Map On-Premises Applications. Once you have the results of the discovery, you want to diagram all the servers and the dependencies and communications among them. With that understanding, you can group the servers into logical clusters, typically corresponding to applications, and then consider a separate migration plan for each set.


Assessment: Evaluate. For each of the application server groups, figure out how best to leverage cloud resources (Azure, in a typical .NET shop) to host it. Vendors provide tools that will help recommend such strategies. Most importantly, evaluate the total cost of ownership (TCO) for these server sets by comparing them with comparable Azure installations. If the TCO savings cannot be realized, then perhaps that set of servers should not be migrated.
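
As a back-of-the-envelope illustration of what a TCO comparison boils down to (all cost figures and categories below are invented), the arithmetic might look like this:

```typescript
// Back-of-the-envelope TCO comparison for one server group; all figures are invented.
interface YearlyCost {
  hardwareOrCompute: number;
  licensing: number;
  operations: number; // staff time, power, cooling, data center space, etc.
}

function totalCost(cost: YearlyCost, years: number): number {
  return (cost.hardwareOrCompute + cost.licensing + cost.operations) * years;
}

const onPremises: YearlyCost = { hardwareOrCompute: 40_000, licensing: 15_000, operations: 60_000 };
const cloud: YearlyCost = { hardwareOrCompute: 55_000, licensing: 5_000, operations: 25_000 };

const horizonYears = 3;
const savings = totalCost(onPremises, horizonYears) - totalCost(cloud, horizonYears);
console.log(`Projected ${horizonYears}-year savings: ${savings}`); // positive => migration pays off
```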


Migration: Rehost. This is a so-called “lift and shift” in which you essentially move your on-premises servers to equivalent servers in the cloud. There is no coding involved with this alternative. The capabilities of your applications will not change – just the location of the servers.  This is a pure “Infrastructure as a Service” (IaaS) solution.


Migration: Refactor. Ideally, we want to use “Platform as a Service” (PaaS) functionality. We can refactor to use some simple services such as cloud database services without changing the overall design of the application. This would involve some coding changes, but those are generally straightforward changes that don’t require tough decisions.
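
As a small, hypothetical illustration of this kind of refactoring, a hard-coded on-premises connection string might be replaced by one read from configuration so the application can point at a managed cloud database; the variable and server names below are invented:

```typescript
// Illustrative refactor: read the database connection string from configuration
// instead of hard-coding an on-premises server name (assumes a Node.js runtime).
const connectionString =
  process.env.SQL_CONNECTION_STRING ??            // e.g. points at a managed cloud database
  "Server=onprem-sql01;Database=Orders;Integrated Security=true;"; // legacy fallback

console.log("Connecting with:", connectionString);
```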


Migration: Rearchitect. In this case, we would redesign parts of the application to take advantage of the architectural benefits that the cloud offers, especially around scalability and DevOps, while otherwise keeping the original functionality of the application.


Migration: Rebuild. If you want to take advantage of functionality that your cloud vendor offers but that you do not already have, then you would, of course, need to rebuild the entire application. These capabilities typically include things like artificial intelligence, blockchain, and Internet of Things (IoT). More compelling, however, is to adopt a completely "cloud native" application strategy, at which point you no longer manage your own servers, infrastructure, or licenses.


Optimize: Analyze.  Spending on cloud resources can easily get out of hand and undermine one of the key intended purposes of the migration – cost-effectiveness. You can use certain tools to help with this, but much of it is just detailed analysis to figure out where you can trim back.


Optimize: Save. In addition to trimming spending, cloud vendors typically offer discounts for longer-term commitments when you are ready to commit. 


Optimize: Reinvest. Once you have saved some money on the cloud migration, funds are freed up to do more.


Cloud Management and Security: Security. Cloud vendors have security centers built in for unified security management and advanced threat protection. 


Cloud Management and Security: Data Protection. Cloud vendors provide managed backup and recovery services that are straightforward to configure.


Cloud Management and Security: Cloud Health Monitoring. Cloud vendors provide built-in, overlapping monitoring systems that cover infrastructure, applications, and service health.