SA

1. Quality attributes

1.1. Design Qualities

1.1.1. Conceptual Integrity

1.1.1.1. Description

1.1.1.1.1. Conceptual integrity defines the consistency and coherence of the overall design. This includes the way that components or modules are designed, as well as such factors as the coding style and variable naming.

1.1.1.2. Measurable metrics

1.1.1.2.1. List of design patterns and styles to be used

1.1.1.2.2. Afferent coupling (Ca): value. The number of types outside this assembly that depend on types within this assembly. High afferent coupling indicates that the concerned assemblies have many responsibilities.

1.1.1.2.3. Efferent coupling (Ce): the number of types outside this assembly used by child types of this assembly. High efferent coupling indicates that the concerned assembly is dependent on others. Notice that types declared in third-party assemblies are taken into account. (Both Ca and Ce are illustrated in the sketch after this list.)

1.1.1.2.4. Etc
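
The coupling metrics above can be computed from a simple dependency graph. Below is a minimal, illustrative sketch in Python; the module names and the `deps` map are made-up assumptions, and real tools work at the type/assembly level rather than on a hand-written map.

```python
# Hypothetical module-dependency map: deps[m] = modules that m depends on.
deps = {
    "billing":   {"db", "auth"},
    "reporting": {"db"},
    "auth":      {"db"},
    "db":        set(),
}

def efferent(module: str) -> int:
    """Ce: how many modules this module depends on."""
    return len(deps[module])

def afferent(module: str) -> int:
    """Ca: how many modules depend on this module."""
    return sum(module in targets for targets in deps.values())

for m in deps:
    print(f"{m}: Ca={afferent(m)}, Ce={efferent(m)}")
# "db" has the highest Ca (many modules pull on it);
# "billing" has the highest Ce (it is the most dependent).
```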

1.1.2. Maintainability

1.1.2.1. Description

1.1.2.1.1. Maintainability is the ability of the system to undergo changes with a degree of ease. These changes can impact components, services, features, and interfaces when adding or changing the functionality, fixing errors, and meeting new business requirements.

1.1.2.2. Measurable metrics

1.1.2.2.1. Cyclomatic Complexity (CC): a popular procedural software metric equal to the number of decisions that can be taken in a procedure. Methods where CC is higher than 15 are hard to understand and maintain; methods where CC is higher than 30 are extremely complex and should be split into smaller methods (except when they are automatically generated by a tool). Recommended threshold value: 20. (See the sketch after this list.)

1.1.2.2.2. Type size: the number of lines of code in a type's definition. Recommended threshold value: 200

1.1.2.2.3. Percentage of comments. Code where the percentage of comments is lower than 20% should be commented more; however, overly commented code (>40%) is not good either. This metric is computed with the following formula: PercentageComment = 100 * NbLinesOfComment / (NbLinesOfComment + NbLinesOfCode)

1.1.2.2.4. Efferent coupling at type level (Ce). The efferent coupling of a particular type is the number of types it directly depends on. Notice that types declared in third-party assemblies/libraries are taken into account. Types where TypeCe > 50 depend on too many other types.

1.1.2.2.5. Etc
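
The maintainability metrics above are easy to approximate mechanically. A minimal sketch for Python sources follows; the decision-point counting is deliberately simplified (CC = decision points + 1, with chained and/or counted once per operator), so treat it as an illustration rather than a substitute for a real analysis tool.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    # Count branch points: if/for/while/except, boolean operators, ternaries.
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While,
                          ast.ExceptHandler, ast.IfExp, ast.And, ast.Or))
        for node in ast.walk(tree)
    )
    return decisions + 1

def percentage_comment(source: str) -> float:
    # Crude line-based version of:
    # PercentageComment = 100 * NbLinesOfComment / (NbLinesOfComment + NbLinesOfCode)
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    comments = sum(line.startswith("#") for line in lines)
    return 100 * comments / len(lines)

sample = """
# validate and total an order
def total(items):
    s = 0
    for price in items:
        if price > 0:
            s += price
    return s
"""
print(cyclomatic_complexity(sample))          # 3 (for + if + 1)
print(round(percentage_comment(sample), 1))   # 14.3
```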

1.1.3. Reusability

1.1.3.1. Description

1.1.3.1.1. Reusability defines the capability of components and subsystems to be suitable for use in other applications and scenarios. Reusability minimizes the duplication of components and the implementation time.

1.1.3.2. Measurable metrics

1.1.3.2.1. List of exact components/libraries that should be re-usable;

1.1.3.2.2. Re-usable code base: percentage

1.1.3.2.3. Etc

1.2. Run-time Qualities

1.2.1. Availability

1.2.1.1. Description

1.2.1.1.1. Availability defines the proportion of time that the system is functional and working. It can be computed from the total system downtime over a predefined period. Availability is affected by system errors, infrastructure problems, malicious attacks, and system load.

1.2.1.2. Measurable metrics

1.2.1.2.1. Availability % (not including planned downtime); see https://en.wikipedia.org/wiki/High_availability#Percentage_calculation for more details. (A small calculation sketch follows this list.)

1.2.1.2.2. Planned downtime (mins per day/week/month);

1.2.1.2.3. Time required to update software/hardware on the running system: # of mins

1.2.1.2.4. Etc
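
The availability percentage and the corresponding downtime budget are simple arithmetic. A small sketch, with all figures illustrative, assuming availability is computed against scheduled time (i.e. excluding planned downtime, as noted above):

```python
MONTH_MIN = 30 * 24 * 60   # minutes in a 30-day month

def availability_pct(period_min: float, unplanned_min: float,
                     planned_min: float = 0.0) -> float:
    # Planned downtime is excluded from the scheduled-time base.
    scheduled = period_min - planned_min
    return 100 * (scheduled - unplanned_min) / scheduled

def downtime_budget_min(nines_pct: float, period_min: float = MONTH_MIN) -> float:
    # Allowed unplanned downtime per period for a target like 99.9%.
    return period_min * (1 - nines_pct / 100)

print(f"{availability_pct(MONTH_MIN, unplanned_min=45, planned_min=120):.3f}%")  # 99.896%
print(f"{downtime_budget_min(99.9):.1f} min/month")    # 43.2
print(f"{downtime_budget_min(99.99):.1f} min/month")   # 4.3
```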

1.2.2. Interoperability

1.2.2.1. Description

1.2.2.1.1. Interoperability is the ability of a system or different systems to operate successfully by communicating and exchanging information with other external systems written and run by external parties. An interoperable system makes it easier to exchange and reuse information internally as well as externally.

1.2.2.2. Measurable metrics

1.2.2.2.1. List of exact supported integration protocols/standards: list

1.2.2.2.2. Backward compatibility for integration API: percentage;

1.2.2.2.3. Ability to support multiple versions of SOAP (at least 1.1 and 1.2) APIs: yes/no

1.2.2.2.4. Integration API breaking changes: percentage

1.2.2.2.5. Etc

1.2.3. Manageability

1.2.3.1. Description

1.2.3.1.1. Manageability defines how easy it is for system administrators to manage the application, usually through sufficient and useful instrumentation exposed for use in monitoring systems and for debugging and performance tuning.

1.2.3.2. Measurable metrics

1.2.3.2.1. System logs are collected: yes/no

1.2.3.2.2. Level of logging may be changed in a runtime: yes/no

1.2.3.2.3. Troubleshooting tools exist, are up to date, well documented, and well known to administrators: yes/no

1.2.3.2.4. The system is monitored by third-party tools: yes/no

1.2.3.2.5. Exact list of information to be collected/traced/monitored for diagnostics and for administration/troubleshooting: list

1.2.4. Performance

1.2.4.1. Description

1.2.4.1.1. Performance is an indication of the responsiveness of a system to execute any action within a given time interval. It can be measured in terms of latency or throughput. Latency is the time required to respond to any event. Throughput is the number of events that take place within a given amount of time.

1.2.4.2. Measurable metrics

1.2.4.2.1. Estimated number of end-users by locations (total): #

1.2.4.2.2. Estimated number of concurrent users per location (average/peak): #

1.2.4.2.3. Data storage size estimated value: Gbytes/Pbytes

1.2.4.2.4. Data storage size estimated growth per year: Gbytes/Pbytes

1.2.4.2.5. Number of records/documents/etc. in data storage: # of records/documents/etc.

1.2.4.2.6. Mean time of page load (for web applications): ms

1.2.4.2.7. Mean time of function call (for web services): ms. (A sketch computing these latency metrics follows.)
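
Raw measurements have to be reduced to the latency metrics above. A minimal sketch with made-up samples, showing why percentiles should accompany the mean:

```python
import statistics

page_load_ms = [180, 210, 190, 850, 205, 230, 1900, 220, 240, 195]

def percentile(samples, p):
    # Nearest-rank percentile; fine for an illustration.
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

print(f"mean:   {statistics.mean(page_load_ms):.0f} ms")   # 442 ms
print(f"median: {statistics.median(page_load_ms):.0f} ms") # 215 ms
print(f"p95:    {percentile(page_load_ms, 95)} ms")        # 1900 ms
# Two slow outliers barely move the median but dominate the tail.
```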

1.2.5. Reliability

1.2.5.1. Description

1.2.5.1.1. Reliability is the ability of a system to remain operational over time. Reliability is measured as the probability that a system will not fail to perform its intended functions over a specified time interval.

1.2.5.2. Measurable metrics

1.2.5.2.1. Failure rate. The frequency with which an engineered system or component fails, expressed in failures per unit of time.

1.2.5.2.2. MTTF (Mean Time To Failure). The average time from the start of operation until the first failure occurs.

1.2.5.2.3. MTTR (Mean Time To Repair). A measure of the average time required to restore a failing component to operation.

1.2.5.2.4. MTBF (Mean Time Between Failures). The average time from the start of operation until the component is restored to operation after repair, i.e. MTTF + MTTR. (The relations between these metrics are shown in the sketch after this list.)

1.2.5.2.5. Time to switch to disaster recovery environment: # of secs

1.2.5.2.6. Number of severity Critical and High customer-reported bugs: #
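
The metrics above are related by standard formulas for a repairable system: MTBF = MTTF + MTTR, failure rate = 1/MTBF, and steady-state availability = MTTF/MTBF. A worked sketch with illustrative figures:

```python
mttf_h = 2000.0   # mean time to failure, hours
mttr_h = 4.0      # mean time to repair, hours

mtbf_h = mttf_h + mttr_h          # mean time between failures
failure_rate = 1.0 / mtbf_h       # failures per hour
availability = mttf_h / mtbf_h    # steady-state availability

print(f"MTBF:         {mtbf_h} h")
print(f"failure rate: {failure_rate:.6f} failures/h")
print(f"availability: {availability:.4%}")   # 99.8004%
```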

1.2.6. Scalability

1.2.6.1. Description

1.2.6.1.1. Scalability is the ability of a system either to handle increases in load without impact on performance, or to be readily enlarged.

1.2.6.2. Measurable metrics

1.2.6.2.1. System architecture allows horizontal scaling: yes/no

1.2.6.2.2. Time needed to scale up/down the system: # of secs/mins

1.2.6.2.3. Scaling limits (number of servers, network bandwidth, disk space, etc.) sufficient for the business domain: list

1.2.6.2.4. Exact solution components that must be able to scale out

1.2.6.2.5. Exact scale out conditions

1.2.6.2.6. Ability to scale out the product (addressing growth in transaction volume, amount of custom content, response time, number of libraries).

1.2.7. Security

1.2.7.1. Description

1.2.7.1.1. Security is the capability of a system to prevent malicious or accidental actions outside of the designed usage, and to prevent disclosure or loss of information. A secure system aims to protect assets and prevent unauthorized modification of information.

1.2.7.2. Measurable metrics

1.2.7.2.1. PII security scenarios: list

1.2.7.2.2. Ability of system to detect DDoS attacks: yes/no

1.2.7.2.3. Ability of the system to react to DDoS attacks: yes/no

1.2.7.2.4. User access is restricted according to authentication/authorization: yes/no

1.2.7.2.5. Ability to prevent SQL injections: yes/no

1.2.7.2.6. Ability to prevent XSRF/CSRF: yes/no

1.2.7.2.7. Secured connection: yes/no

1.2.7.2.8. Password encryption (in practice, salted password hashing): yes/no. (See the sketch after this list.)

1.2.7.2.9. Ability to audit and log all user interactions for application-critical operations: yes/no

1.2.7.2.10. Sensitive data security (encryption, not logging, passing by secure channels only, closed for unauthorized access): yes/no
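
For the password item above, "encryption" in practice means salted, deliberately slow hashing. A minimal sketch using hashlib.scrypt from the Python standard library; the cost parameters are illustrative assumptions, not a vetted security policy:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                       # unique salt per password
    digest = hashlib.scrypt(password.encode(),
                            salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(),
                               salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```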

1.3. System Qualities

1.3.1. Supportability

1.3.1.1. Description

1.3.1.1.1. Supportability is the ability of the system to provide information helpful for identifying and resolving issues when it fails to work correctly.

1.3.1.2. Measurable metrics

1.3.1.2.1. Mean time to identify the root cause of bugs/issues: secs/mins

1.3.1.2.2. Etc

1.3.2. Testability

1.3.2.1. Description

1.3.2.1.1. Testability is a measure of how easy it is to create test criteria for the system and its components, and to execute these tests in order to determine if the criteria are met. Good testability makes it more likely that faults in a system can be isolated in a timely and effective manner.

1.3.2.2. Measurable metrics

1.3.2.2.1. Unit tests coverage: percentage

1.3.2.2.2. Integration tests coverage: percentage

1.3.2.2.3. Exact list of required test environments (functional testing env., performance testing env., security testing env., etc.): list

1.3.2.2.4. Exact list of test approaches to be used (manual/automated, unit, end-2-end, regression, integration): list

1.3.3. Auditability

1.3.3.1. Description

1.3.3.1.1. Ability of the system to provide audit trails in order to track users' operations.

1.3.3.2. Measurable metrics

1.3.3.2.1. List of operations that should leave an audit trail in 100% of cases

1.3.3.2.2. Exact parameters about users and their activities to be recorded for audit purposes

1.3.4. Deployability

1.3.4.1. Description

1.3.4.1.1. Ability of the system to be deployed with minimal effort and downtime.

1.3.4.2. Measurable metrics

1.3.4.2.1. Deployment downtime: secs/mins

1.4. User Qualities

1.4.1. Usability

1.4.1.1. Description

1.4.1.1.1. Usability defines how well the application meets the requirements of the user and consumer by being intuitive, easy to localize and globalize, providing good access for disabled users, and resulting in a good overall user experience

1.4.1.2. Measurable metrics

1.4.1.2.1. Reference to specific UI/UX guideline to be followed

1.4.1.2.2. List of devices to be supported (if applicable)

1.4.1.2.3. List of resolutions to be supported (if applicable)

1.4.1.2.4. List of OS versions to be supported

1.4.1.2.5. List of browsers/versions to be supported (if applicable)

1.4.1.2.6. List of locales/cultures to be supported

1.4.1.2.7. Support of Section 508 (accessibility for people with disabilities): yes/no

1.4.1.2.8. Accelerators like hotkeys, 'suggestion lists', etc: list

1.4.1.2.9. Number of clicks to access particular functionality in UI: # of clicks

1.4.1.2.10. Mean time required for an average user to become familiar with the system: # of mins

2. Architecture definitions

3. Architectural Patterns

3.1. Layered

3.1.1. Advantages

3.1.1.1. A lower layer can be used by different higher layers. Layers make standardization easier as we can clearly define levels. Changes can be made within the layer without affecting other layers

3.1.2. Disadvantages

3.1.2.1. Not universally applicable. Certain layers may have to be skipped in certain situations.

3.2. Client-server

3.2.1. Advantages

3.2.1.1. Good for modeling a set of services that clients can request.

3.2.2. Disadvantages

3.2.2.1. Requests are typically handled in separate threads on the server. Inter-process communication causes overhead as different clients have different representations.

3.3. Master-slave

3.3.1. Advantages

3.3.1.1. Accuracy - The execution of a service is delegated to different slaves, with different implementations.

3.3.2. Disadvantages

3.3.2.1. The slaves are isolated: there is no shared state. The latency of master-slave communication can be an issue, for instance in real-time systems. This pattern can only be applied to problems that can be decomposed. (A minimal voting sketch follows.)
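
A minimal master-slave sketch (the function names and the deliberately buggy slave are illustrative): the master delegates the same computation to independently implemented slaves and takes the majority answer, which is the accuracy use case named above.

```python
from collections import Counter

def slave_builtin(xs):          # one implementation
    return sum(xs)

def slave_loop(xs):             # an independent implementation
    total = 0
    for x in xs:
        total += x
    return total

def slave_buggy(xs):            # a faulty slave, outvoted below
    return sum(xs[1:])

def master(xs, slaves):
    votes = Counter(slave(xs) for slave in slaves)
    answer, _count = votes.most_common(1)[0]
    return answer

print(master([1, 2, 3], [slave_builtin, slave_loop, slave_buggy]))  # 6
```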

3.4. Pipe-filter

3.4.1. Advantages

3.4.1.1. Exhibits concurrent processing when input and output consist of streams, since filters start computing as soon as they receive data. Filters are easy to add, so the system can be extended easily. Filters are reusable, and different pipelines can be built by recombining a given set of filters.

3.4.2. Disadvantages

3.4.2.1. Efficiency is limited by the slowest filter in the chain, and moving data from one filter to another adds transformation overhead. (A generator-based sketch follows.)
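
Python generators give a compact pipe-filter sketch: each filter yields results as soon as it receives data (the stream-style processing noted above), and filters recombine freely into different pipelines. The filters themselves are made-up examples.

```python
def read_lines(text):
    for line in text.splitlines():
        yield line

def strip_blank(lines):
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):
    for line in lines:
        yield line.upper()

# Recombine the same filters into different pipelines as needed.
pipeline = to_upper(strip_blank(read_lines("alpha\n\nbeta\n")))
print(list(pipeline))   # ['ALPHA', 'BETA']
```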

3.5. Broker

3.5.1. Advantages

3.5.1.1. Allows dynamic change, addition, deletion and relocation of objects, and it makes distribution transparent to the developer.

3.5.2. Disadvantages

3.5.2.1. Requires standardization of service descriptions.

3.6. Peer-to-peer

3.6.1. Advantages

3.6.1.1. Supports decentralized computing. Highly robust in the failure of any given node. Highly scalable in terms of resources and computing power.

3.6.2. Disadvantages

3.6.2.1. There is no guarantee of quality of service, as nodes cooperate voluntarily. Security is difficult to guarantee. Performance depends on the number of nodes.

3.7. Event-bus

3.7.1. Advantages

3.7.1.1. New publishers, subscribers and connections can be added easily. Effective for highly distributed applications.

3.7.2. Disadvantages

3.7.2.1. Scalability may be a problem, as all messages travel through the same event bus. (A minimal bus sketch follows.)
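
A minimal in-process event bus sketch; the topic names and payloads are illustrative, and a real distributed bus adds queues, persistence, and retries.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Every message passes through this single dispatch point
        # (the scalability bottleneck noted above).
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
bus.subscribe("order.created", lambda order: print("billing saw", order))
bus.subscribe("order.created", lambda order: print("shipping saw", order))
bus.publish("order.created", {"id": 42})
```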

3.8. MVC (MVVM, MVP)

3.8.1. Advantages

3.8.1.1. Makes it easy to have multiple views of the same model, which can be connected and disconnected at run-time.

3.8.2. Disadvantages

3.8.2.1. Increases complexity. May lead to many unnecessary updates for user actions. (A minimal observer sketch follows.)
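
The run-time attach/detach of views named above reduces to a small observer sketch (class names are illustrative):

```python
class Model:
    def __init__(self, value: int = 0):
        self._value = value
        self._views = []

    def attach(self, view) -> None:     # connect a view at run time
        self._views.append(view)

    def set(self, value: int) -> None:
        self._value = value
        for view in self._views:        # notify every attached view
            view.render(value)

class TextView:
    def render(self, value):
        print(f"value = {value}")

class BarView:
    def render(self, value):
        print("#" * value)

model = Model()
model.attach(TextView())
model.attach(BarView())
model.set(5)    # both views update from the same model
```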

3.9. Blackboard

3.9.1. Advantages

3.9.1.1. Easy to add new applications. Extending the structure of the data space is easy.

3.9.2. Disadvantages

3.9.2.1. Modifying the structure of the data space is hard, as all applications are affected. May need synchronization and access control.

3.10. Interpreter

3.10.1. Advantages

3.10.1.1. Highly dynamic behavior is possible. Good for end user programmability. Enhances flexibility, because replacing an interpreted program is easy.

3.10.2. Disadvantages

3.10.2.1. Because an interpreted language is generally slower than a compiled one, performance may be an issue. (A tiny interpreter sketch follows.)
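
A tiny interpreter sketch for prefix arithmetic (the mini-language is made up): changing the program string changes behavior with no recompilation, which illustrates both the flexibility and the per-token overhead described above.

```python
def tokenize(src: str) -> list[str]:
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens: list[str]):
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)            # drop the closing ")"
        return expr
    return token

def evaluate(expr) -> int:
    if isinstance(expr, str):    # a bare number
        return int(expr)
    op, *args = expr
    values = [evaluate(arg) for arg in args]
    if op == "+":
        return sum(values)
    if op == "*":
        product = 1
        for v in values:
            product *= v
        return product
    raise ValueError(f"unknown operator: {op}")

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))   # 7
```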

4. SEI BEST PRACTICES

4.1. Architectural Concept

4.1.1. Quality attributes

4.1.1.1. Comment:

4.1.1.1.1. Quality attributes are one of the key SEI concepts. The concept helps to formalize the key quality areas of the solution that matter most to the customer. Such areas must be addressed by the solution architecture and implementation; otherwise the solution will fall far short of customer expectations. This is also the only reasonable basis for evaluating a solution architecture.

4.1.1.1.2. Quality attributes help to define architecturally significant drivers.

4.1.1.1.3. Quality attributes may be used to understand if the architecture solves business needs.

4.1.1.2. Recommendations

4.1.1.2.1. Strongly recommended for use during architecture design or review activities, paired with the ASR concept.

4.1.1.2.2. Quality attributes should be defined for every solution under evaluation or initial design. Capture as many relevant attributes as possible at the earliest stages of the process.

4.1.1.2.3. It's a must, but it's not the only thing needed. There is a formal list of quality attributes and you can start your work by defining those, but that is not all: your job is not just to collect quality attributes, but to define and quantify the properties of the system as well.

4.1.1.2.4. Anything can be a QA, so think freely. If the customer says the UI must be beautiful, that can be a quality attribute.

4.1.1.2.5. Don't limit yourself to the QA list that SEI describes; extend it with additional quality attributes as needed.

4.1.1.2.6. Break quality attributes down into particular measurable metrics and trace them into all involved components of your solution. That way every stakeholder (and, importantly for a quality implementation, every development team member) will understand the solution's quality requirements in the same way. As a result, you will have clear and simple quality criteria to demonstrate quality to the customer and to guide the development team.

4.1.1.2.7. For example, if performance is one of the key quality attributes for your web application, you need to introduce a set of particular, clear metrics for it, like mean page load time under normal operation, page load time during peak load, mean and peak numbers of page requests per second, etc. After that, you need to describe what each of them means for all involved components, i.e. metrics for the web application itself, metrics for the web services behind it, and metrics for the database on the backend. (A machine-readable sketch of such a breakdown follows.)
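
One way to keep such a breakdown unambiguous is to record it in machine-readable form. A sketch of the example above; every component name and threshold here is an illustrative placeholder to be agreed with stakeholders:

```python
performance_metrics = {
    "web-frontend": {
        "mean_page_load_ms_normal": 500,
        "page_load_ms_peak": 1500,
        "peak_page_requests_per_sec": 300,
    },
    "web-services": {
        "mean_call_latency_ms": 100,
        "peak_calls_per_sec": 900,
    },
    "database": {
        "mean_query_ms": 20,
        "peak_queries_per_sec": 2500,
    },
}

for component, metrics in performance_metrics.items():
    for name, threshold in metrics.items():
        print(f"{component}: {name} <= {threshold}")
```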

4.1.2. Architecturally Significant Requirements (ASRs)

4.1.2.1. Comment

4.1.2.1.1. To reason about the architecture of a particular system, an architect must determine the key requirements that influence the architecture. This concept selects the most important requirements from the list of identified requirements (functional and non-functional). It helps to understand what must be addressed by the solution architect in order to build a solution that fits customer expectations and business needs.

4.1.2.1.2. It is essential for the architect to distinguish ASRs within a large set of requirements. A few requirements that look small from a functional perspective can turn out to be significant, even breaking, for the architecture.

4.1.2.2. Specifics

4.1.2.2.1. This concept is very sensitive to availability of key stakeholders for direct contact and discussions.

4.1.2.3. Alternative practices

4.1.2.3.1. The SA's best guess about ASRs, if stakeholders are not available.

4.1.2.4. Recommendations

4.1.2.4.1. This concept is strongly recommended for use during any architecture design or architecture review activities.

4.1.2.4.2. This concept can be used as a core approach during requirements gathering and evaluation.

4.1.2.4.3. This concept helps to prioritize quality attributes as well. You definitely have to use it, as one quality attribute can be more important than another. All quality attributes can affect your architecture, but the emphasis is on "significant": order them and then cut the list, since most quality attributes have only a negligible effect. This will make your work much easier.

4.1.3. Key stakeholders

4.1.3.1. Comment

4.1.3.1.1. Architects do not build solutions for fun: it is important to stay pragmatic and focus on business drivers. Stakeholders are an important source of insights that help to build a system that is the right fit and solves the business problem. Treat stakeholders as a source of insights and requirements, but not of solutions.

4.1.3.1.2. Key stakeholders are the people who influence the solution architecture and for whom the solution is built. They are the source of requirements for the particular solution, and they are also key to understanding the ASRs and the key quality attributes.

4.1.3.2. Specifics

4.1.3.2.1. Key stakeholders are not always available to the SA during the work;

4.1.3.2.2. It is usually a challenge to organize the work with stakeholders efficiently enough to get their comments on requirements.

4.1.3.2.3. It is not always easy to identify the right stakeholders. More frustrating still, sometimes you think you are speaking to the right stakeholder when you are not. Understanding the full list of stakeholders and identifying the key ones is critical for success.

4.1.3.3. Alternative practices

4.1.3.3.1. If stakeholders are not available:

4.1.3.3.2. the SA can make a best guess about the stakeholder roles involved;

4.1.3.3.3. after that, the SA can make best guesses about the ASRs for those roles.

4.1.3.4. Recommendations

4.1.3.4.1. If you use the concepts of quality attributes and ASRs, you will need this one as well, because you need external input from the customer side to define the quality attributes and the ASRs. Alternatively, you can design in a vacuum, but then the result will be sub-optimal (unless you know everything about the required solution, in which case chances are you are a key stakeholder).

4.1.3.4.2. Communication with stakeholders is one of the major sources of requirements and concerns. Use it as much as you can in every specific case.

4.1.3.4.3. It is strongly recommended to use this concept when you are working with solution requirements.

4.1.4. Software Product Line

4.1.4.1. Comment

4.1.4.1.1. This concept is helpful when you are building similar products for the same client. It also helps you recognize that the customer has (perhaps unintentionally) started building a product line, so your architecture can take this into account.

4.1.4.1.2. This concept can save a lot of money for a company that builds several products on the same basis, which can be a driver for its adoption by the customer.

4.1.4.2. Specifics

4.1.4.2.1. This concept seems to be very efficient for product companies.

4.1.4.2.2. Not all customers actually plan to have a product line. Quite often the customer needs just a custom solution and nothing more; in that case the product line concept should not be applied, because the overhead will never pay off.

4.1.4.2.3. Also, this concept is quite specific and in most cases will not be applicable in EPAM practice, because EPAM customers usually need specific custom solutions that can hardly be treated as a product line.

4.1.4.3. Recommendations

4.1.4.3.1. It is recommended to keep this concept in mind.

4.1.4.3.2. Use this concept if you see that the designed solution will most likely be reused in the future to build other solutions for the same customer (yielding a product line as a result).

4.1.4.3.3. Use this concept when you work on one of the solutions of a product company.

4.1.5. Architecture Influence Cycle

4.1.5.1. Comment

4.1.5.1.1. It is important for an architect to understand how the architecture will impact future changes, customer business transformations and so on. The concept is mainly focused on the fact that there is a cycle of influence between architecture, system, technical environment, stakeholders etc.

4.1.5.1.2. This concept helps understand the context of SA work better.

4.1.5.2. Recommendations

4.1.5.2.1. It is recommended to be aware of this concept in order to understand the context of SA work better. The concept is an interesting observation, though it has little practical application beyond "just think about the future".

4.2. Architectural Practice

4.2.1. Quality Attribute Workshop (QAW)

4.2.1.1. Comment

4.2.1.1.1. The QAW is a facilitated, stakeholder-focused method to generate, prioritize, and refine quality attribute scenarios before the software architecture is completed;

4.2.1.1.2. This is an efficient practice for:

4.2.1.1.3. understanding the solution quality areas that are critical for key stakeholders on the customer side;

4.2.1.1.4. forming the list of architecturally significant quality attributes.

4.2.1.2. Specifics

4.2.1.2.1. This practice is very sensitive to availability of stakeholders;

4.2.1.2.2. You need to have direct contact with stakeholders;

4.2.1.2.3. This practice is time-consuming for stakeholders, which may be unacceptable for them (you need to get them together in one room for a 1-2 day discussion).

4.2.1.2.4. You need to follow ADDM

4.2.1.2.5. Summarizing: QAW is a good example of an academic approach that does not always work in reality. An architect should know about this practice and try to use it when possible.

4.2.1.3. Alternative practices

4.2.1.3.1. PALM; Utility Tree.

4.2.1.4. Recommendations

4.2.1.4.1. This practice gives an impressive outcome. Use it if (1) stakeholders are available and (2) it is possible to spend some time (1-2 days) with stakeholders working in the QAW format. Most likely you will need to be onsite with the customer to apply this practice.

4.2.1.4.2. This practice is perfect for an onsite discovery phase, when you have direct access to stakeholders and a chance to conduct the workshop in person. As a side effect, it can help you prepare the agenda or action plan for the onsite discovery phase.

4.2.1.4.3. This practice is not recommended for short activities, like a few-day pre-sale, because it is time-consuming.

4.2.1.4.4. You can also use this practice to identify key quality attributes during architecture design or architecture review, again provided that (1) stakeholders are available and (2) it is possible to spend 1-2 days with them working in the QAW format.

4.2.2. Attribute Driven Design Method (ADDM)

4.2.2.1. Comment

4.2.2.1.1. This practice helps to align SA efforts toward building a solution with the required qualities. At the same time, it should go together with other design approaches as part of a more general design process.

4.2.2.1.2. This looks like a generic, common approach that can be used during architecture design. It may carry extra cost for activities like pre-sales, where you need only a high-level conceptual architecture; but if you are working on architecture design, you may use this method.

4.2.2.2. Specifics

4.2.2.2.1. ADDM is focused on addressing particular ASRs, while a high-level design approach should still direct the overall design effort.

4.2.2.2.2. ADDM depends on ASRs/quality attributes.

4.2.2.3. Recommendations

4.2.2.3.1. It is recommended to use this practice as part of the general architecture design process. Use it when you need to make sure that all ASRs are covered by your design decisions.

4.2.2.3.2. The practice is not recommended when (1) there is no access to requirements and (2) there is no access to stakeholders.

4.2.3. Quality Attribute Scenarios

4.2.3.1. Comment

4.2.3.1.1. This practice helps to model any quality attribute in a simple, standard way for any solution and any requirements. As a result, it helps to make any quality attribute testable. It also helps in brainstorming, discussing, and capturing quality attribute requirements for a system.

4.2.3.1.2. Without this practice, every stakeholder will understand the qualities of the future solution in a different way. So one of the additional advantages of this practice is efficient communication among stakeholders and good consensus on the ASRs for the future solution.

4.2.3.1.3. The practice helps to understand the quality attributes and ASRs. At the same time, it takes time and effort from both the SA and the solution stakeholders.

4.2.3.1.4. Though it might look like overkill, it helps to avoid requirements gaps and unseen risks. (A sketch of the standard six-part scenario form follows.)
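
SEI quality attribute scenarios have a standard six-part form (source, stimulus, artifact, environment, response, response measure). A sketch capturing one made-up scenario in that shape, so every scenario is recorded, and testable, in the same way:

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition arriving at the system
    artifact: str          # the part of the system being stimulated
    environment: str       # conditions under which the stimulus arrives
    response: str          # the required system behavior
    response_measure: str  # how the response is tested

scenario = QualityAttributeScenario(
    source="end user",
    stimulus="submits a search query",
    artifact="web tier",
    environment="normal operation, peak hour",
    response="results page is returned",
    response_measure="mean latency under 2 s for 95% of queries",
)
print(scenario.response_measure)
```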

4.2.3.2. Specifics

4.2.3.2.1. The practice is pretty time-consuming for the SA and for stakeholders; modeling quality attributes takes time.

4.2.3.3. Recommendations

4.2.3.3.1. It is strongly recommended whenever you are addressing particular key quality attributes in detail in your future solution architecture.

4.2.3.3.2. Use this practice when solution stakeholders can invest the time it requires (a few meetings, each a couple of hours long, with most of the stakeholders present). Alternatively, you can try it offline (emails, wiki pages, etc.), but coordination will require a lot of effort from you.

4.2.3.3.3. In general, this very powerful practice seems nearly impossible to apply on typical EPAM engagements because of its specifics.

4.2.3.3.4. On regular EPAM engagements you can use this practice on a very limited set of key quality attributes (1-2) of your future solution, just to get a well-rounded view of a quality attribute and of the ways to address it in detail.

4.2.4. Utility tree

4.2.4.1. Comment

4.2.4.1.1. This practice helps to start from overall system utility in order to understand the business value of, and the implementation effort for, ASRs.

4.2.4.1.2. It is useful for seeing the whole picture of ASRs in a compact way, which helps to discuss ASRs with stakeholders in a limited timeframe.

4.2.4.1.3. Helps to structure ASRs and quality attributes in a better way.

4.2.4.2. Specifics

4.2.4.2.1. Direct access to key stakeholders is required in order to estimate business value of ASRs.

4.2.4.2.2. Takes some additional time.

4.2.4.3. Recommendations

4.2.4.3.1. It is recommended to use this practice if you need to prioritize and cut ASRs, and if you need to reduce the amount of work (by removing ASRs that are too expensive or not valuable for the business, after discussing this with the customer and getting approval).

4.2.4.3.2. You can use this practice to create a mind map for quality attribute evaluation.

4.2.5. Quality Attributes Modelling

4.2.5.1. Comment

4.2.5.1.1. Modeling solution qualities means doing quantitative modeling and analysis for them. As a result, you get a fairly exact understanding of quality attribute metrics and their threshold values, and a clear answer on how to achieve the required metric values (and thereby address the quality attribute).

4.2.5.1.2. The advantage of this practice is its deep focus on a particular quality attribute, which makes sense if you need to be 100% sure that the attribute is achievable in your solution according to the metrics required by the customer. (A small modeling sketch follows.)
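
As one concrete (and assumed) example of such quantitative modeling, the classic M/M/1 queue gives mean response time R = 1 / (mu - lambda) for arrival rate lambda and service rate mu; even this crude model shows where a latency threshold breaks down as load grows.

```python
def mm1_response_time_s(arrival_rate: float, service_rate: float) -> float:
    # Mean response time of an M/M/1 queue; valid only while stable.
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrivals meet or exceed capacity")
    return 1.0 / (service_rate - arrival_rate)

mu = 120.0                       # the server handles 120 req/s
for lam in (60, 100, 115):       # offered load, req/s
    r_ms = 1000 * mm1_response_time_s(lam, mu)
    print(f"load {lam}/s -> mean response {r_ms:.0f} ms")
# load 60/s -> 17 ms; load 100/s -> 50 ms; load 115/s -> 200 ms.
# Response time explodes as utilization approaches 1: the kind of
# threshold insight this modeling practice is meant to surface.
```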

4.2.5.2. Specifics

4.2.5.2.1. This practice is time-consuming and sensitive to the mathematical capabilities of the SA.

4.2.5.3. Alternative practices

4.2.5.3.1. Regular prototyping/POC activity.

4.2.5.4. Recommendations

4.2.5.4.1. Use this practice on proof-of-concept activities.

4.2.5.4.2. Use this practice when you need to address key quality attributes of mission-critical solutions in great depth.

4.2.5.4.3. It is usually not possible to spend time on such activities in the typical rush of pre-sales or discovery.

4.2.6. Quality Attribute Tactics

4.2.6.1. Comment

4.2.6.1.1. Architectural tactics are a good way to standardize and unify work on addressing particular quality attributes. This makes sense when you need to present stakeholders with a single vision of the quality attributes and the way you address them.

4.2.6.1.2. This practice helps you design the system properly once you have defined the quality attributes. Most of the tactics are widely used and obvious.

4.2.6.2. Specifics

4.2.6.2.1. The more tactics you combine to achieve some quality attribute, the more expensive the solution will be; it makes sense to find a golden mean in every particular case.

4.2.6.2.2. For some architects this SEI practice looks inconvenient. Most architects already do it, just not as formally or rigidly, and justify this with something like "I personally put more stress on past experience or things I've read about solving an issue".

4.2.6.2.3. There is a concern about how well this practice scales up when you need to evaluate all the possible tactics for a large set of quality attributes.

4.2.6.3. Alternative practices

4.2.6.3.1. Ad hoc design.

4.2.6.4. Recommendations

4.2.6.4.1. This practice can be used as a checklist for addressing quality attributes.

4.2.6.4.2. Quality attribute tactics can help you bootstrap your solution design.

4.2.6.4.3. The practice is recommended when you need to discuss ways of addressing particular key quality attributes.

4.2.6.4.4. It is recommended to combine tactics in order to address key quality attributes as efficiently as possible.

4.2.7. Architecture Trade-off Analysis Method (ATAM)

4.2.7.1. Comment

4.2.7.1.1. ATAM can be used to evaluate software architectures in different domains. It is designed so that evaluators need not be familiar with the architecture or its business goals, the system need not yet be constructed, and there may be a large number of stakeholders.

4.2.7.1.2. The idea of this practice - to present the architecture together with business context - sounds absolutely valuable.

4.2.7.2. Specifics

4.2.7.2.1. This practice is time- and money-consuming, as it involves project decision makers, stakeholders, and a review team of several people for a few weeks.

4.2.7.2.2. Practice is sensitive to availability of project decision makers and stakeholders.

4.2.7.3. Alternative practices

4.2.7.3.1. Lightweight Architecture Evaluation;

4.2.7.3.2. EPAM Architecture Review Process.

4.2.7.4. Recommendations

4.2.7.4.1. This practice is recommended for architecture assessment (review) of large and costly projects only.

4.2.7.4.2. Can be used when there are comprehensive solution architecture artifacts.

4.2.7.4.3. This practice is not recommended for typical EPAM solutions: EPAM usually does not have projects big enough to require ATAM. Most likely you should use LAEM instead.

4.2.8. Lightweight Architecture Evaluation Method (LAEM)

4.2.8.1. Comment

4.2.8.1.1. The Lightweight Architecture Evaluation method can be used to evaluate the architecture of smaller projects and produce a list of recommendations.

4.2.8.2. Specifics

4.2.8.2.1. This practice is lightweight, time-efficient, cost-effective, and can be done in a day or so.

4.2.8.2.2. This practice does not produce a formal final report, though it does provide a description of the evaluation results.

4.2.8.3. Alternative practices

4.2.8.3.1. ATAM;

4.2.8.3.2. EPAM Architecture Review Process.

4.2.8.4. Recommendations

4.2.8.4.1. This practice is recommended for smaller, less risky projects where the customer needs a fast architecture health check and recommendations for addressing the findings.

4.2.8.4.2. It may be used as an alternative to ATAM when there are time constraints, so in EPAM reality it can be used more frequently. It still requires the stakeholders to be gathered at the same time in one place, but for a shorter period than ATAM.

5. Architecture Checklist

5.1. Authentication and Authorization

5.1.1. How to store user identities.

5.1.2. How to authenticate callers.

5.1.3. How to authorize callers.

5.1.4. How to flow identity across layers and tiers.

5.2. Caching and State

5.2.1. How to choose effective caching strategies.

5.2.2. How to improve performance with caching.

5.2.3. How to improve security with caching.

5.2.4. How to improve availability with caching.

5.2.5. How to keep the cached data up to date.

5.2.6. How to determine when and why to use a custom cache.

5.2.7. How to determine what data to cache.

5.2.8. How to determine where to cache the data.

5.2.9. How to determine the expiration policy and scavenging mechanism.

5.2.10. How to load the cache data.

5.2.11. How to monitor a cache.

5.2.12. How to synchronize caches across a farm.

5.2.13. How to determine which caching technique provides the best performance and scalability for a specific scenario and configuration.

5.2.14. How to determine which caching technology complies with the application's requirements for security, monitoring, and management

5.3. Communication

5.3.1. How to communicate between layers / tiers.

5.3.2. How to perform asynchronous communication.

5.3.3. How to pass sensitive data.

5.4. Composition

5.4.1. How to design for composition.

5.4.2. How to design loose coupling between modules.

5.4.3. How to handle dependencies in a loosely coupled way.

5.5. Concurrency and Transactions

5.5.1. How to handle concurrency between threads.

5.5.2. How to choose between optimistic and pessimistic concurrency.

5.5.3. How to handle distributed transactions.

5.5.4. How to handle long running transactions.

5.5.5. How to determine appropriate transaction isolation levels.

5.5.6. How to determine when compensating transactions are required.

5.6. Configuration Management

5.6.1. How to determine which information needs to be configurable.

5.6.2. How to determine where and how to store configuration information.

5.6.3. How to handle sensitive information.

5.6.4. How to handle configuration information in a farm/cluster.

5.7. Coupling and Cohesion

5.7.1. How to separate concerns

5.7.2. How to structure the application.

5.7.3. How to choose an appropriate layering strategy.

5.7.4. How to establish boundaries.

5.8. Data Access

5.8.1. How to manage database connections.

5.8.2. How to handle exceptions.

5.8.3. How to improve performance.

5.8.4. How to improve manageability.

5.8.5. How to handle blobs.

5.8.6. How to page records.

5.8.7. How to perform transactions.

5.9. Exception Management

5.9.1. How to handle exceptions.

5.9.2. How to log exceptions.

5.10. Logging and Instrumentation

5.10.1. How to determine which information to log.

5.10.2. How to make the logging configurable

5.11. User Experience

5.11.1. How to improve task efficiency and effectiveness.

5.11.2. How to improve responsiveness.

5.11.3. How to improve user empowerment.

5.11.4. How to improve look and feel.

5.12. Validation

5.12.1. How to determine where and how to perform validation.

5.12.2. How to validate for length, range, format, and type.

5.12.3. How to constrain and reject input.

5.12.4. How to sanitize output.

5.13. Workflow

5.13.1. How to handle concurrency issues within a workflow

5.13.2. How to handle task failure within a workflow

5.13.3. How to orchestrate processes within a workflow