Programming Techniques - Chapter 11: Security and Dependability

Chapter 11 – Security and Dependability (Lecture 1)

Topics covered
- Dependability properties: the system attributes that lead to dependability.
- Availability and reliability: systems should be available to deliver service and perform as expected.
- Safety: systems should not behave in an unsafe way.
- Security: systems should protect themselves and their data from external interference.

System dependability
- For many computer-based systems, the most important system property is the dependability of the system.
- The dependability of a system reflects the user's degree of trust in that system. It reflects the extent of the user's confidence that it will operate as users expect and that it will not 'fail' in normal use.
- Dependability covers the related system attributes of reliability, availability and security. These are all inter-dependent.

Importance of dependability
- System failures may have widespread effects, with large numbers of people affected by the failure.
- Systems that are not dependable and are unreliable, unsafe or insecure may be rejected by their users.
- The costs of system failure may be very high if the failure leads to economic losses or physical damage.
- Undependable systems may cause information loss with a high consequent recovery cost.

Causes of failure
- Hardware failure: hardware fails because of design and manufacturing errors or because components have reached the end of their natural life.
- Software failure: software fails due to errors in its specification, design or implementation.
- Operational failure: human operators make mistakes. This is now perhaps the largest single cause of system failures in socio-technical systems.

Principal dependability properties (figure)

Principal properties
- Availability: the probability that the system will be up and running and able to deliver useful services to users.
- Reliability: the probability that the system will correctly deliver services as expected by users.
- Safety: a judgment of how likely it is that the system will cause damage to people or its environment.
- Security: a judgment of how likely it is that the system can resist accidental or deliberate intrusions.

Other dependability properties
- Repairability: reflects the extent to which the system can be repaired in the event of a failure.
- Maintainability: reflects the extent to which the system can be adapted to new requirements.
- Survivability: reflects the extent to which the system can deliver services whilst under hostile attack.
- Error tolerance: reflects the extent to which user input errors can be avoided and tolerated.

Repairability
- The disruption caused by system failure can be minimized if the system can be repaired quickly.
- This requires problem diagnosis, access to the failed component(s) and making changes to fix the problems.
- Repairability is a judgment of how easy it is to repair the software to correct the faults that led to a system failure.
- Repairability is affected by the operating environment, so it is hard to assess before system deployment.

Maintainability
- A system attribute concerned with the ease of repairing the system after a failure has been discovered, or of changing the system to include new features.
- Repairability is a short-term perspective (getting the system back into service); maintainability is a long-term perspective.
- Maintainability is very important for critical systems, as faults are often introduced into a system because of maintenance problems. If a system is maintainable, there is a lower probability that these faults will be introduced or remain undetected.

Survivability
- The ability of a system to continue to deliver its services to users in the face of deliberate or accidental attack.
- This is an increasingly important attribute for distributed systems whose security can be compromised.
- Survivability subsumes the notion of resilience: the ability of a system to continue in operation in spite of component failures.

Error tolerance
- Part of a more general usability property; reflects the extent to which user errors are avoided, detected or tolerated.
- User errors should, as far as possible, be detected and corrected automatically and should not be passed on to the system and cause failures.

Dependability attribute dependencies
- Safe system operation depends on the system being available and operating reliably.
- A system may be unreliable because its data has been corrupted by an external attack.
- Denial of service attacks on a system are intended to make it unavailable.
- If a system is infected with a virus, you cannot be confident in its reliability or safety.

Dependability achievement
- Avoid the introduction of accidental errors when developing the system.
- Design V & V processes that are effective in discovering residual errors in the system.
- Design protection mechanisms that guard against external attacks.
- Configure the system correctly for its operating environment.
- Include recovery mechanisms to help restore normal system service after a failure.

Dependability costs
- Dependability costs tend to increase exponentially as increasing levels of dependability are required. There are two reasons for this:
  - The use of more expensive development techniques and hardware that are required to achieve the higher levels of dependability.
  - The increased testing and system validation that is required to convince the system client and regulators that the required levels of dependability have been achieved.

Cost/dependability curve (figure)

Dependability economics
- Because of the very high costs of dependability achievement, it may be more cost-effective to accept untrustworthy systems and pay for failure costs.
- However, this depends on social and political factors. A reputation for products that cannot be trusted may lose future business.
- It also depends on the system type: for business systems in particular, modest levels of dependability may be adequate.

Availability and reliability
- Reliability: the probability of failure-free system operation over a specified time, in a given environment, for a given purpose.
- Availability: the probability that a system, at a point in time, will be operational and able to deliver the requested services.
- Both of these attributes can be expressed quantitatively, e.g. an availability of 0.999 means that the system is up and running for 99.9% of the time.
- It is sometimes possible to subsume system availability under system reliability; obviously, if a system is unavailable it is not delivering the specified system services.
- However, it is possible to have systems with low reliability that must be available. So long as system failures can be repaired quickly and do not damage data, some system failures may not be a problem.
- Availability is therefore best considered as a separate attribute reflecting whether or not the system can deliver its services.
- Availability takes repair time into account if the system has to be taken out of service to repair faults.

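The figures above (an availability of 0.999, and the point that repair time is counted) can be made concrete with a small sketch. This is illustrative only and is not from the slides: it uses the common approximation of availability as MTTF / (MTTF + MTTR), where MTTF is the mean time to failure and MTTR the mean time to repair, and the numeric values are invented.

```python
# Illustrative sketch: estimating availability from mean time to failure (MTTF)
# and mean time to repair (MTTR). Formula and numbers are assumptions used for
# illustration, not material from the slides.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Approximate fraction of time the system is up, counting repair time."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that fails roughly every 1000 hours but is repaired within an hour:
print(round(availability(1000, 1), 3))    # 0.999 -> up 99.9% of the time

# A more reliable system (fails every 5000 hours) with a 3-day repair time is
# nevertheless less available: repair time matters as much as failure rate.
print(round(availability(5000, 72), 3))   # 0.986
```

This is why a system with relatively low reliability can still be acceptable if its failures are repaired quickly, as the bullets above note.
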
Perceptions of reliability
- The formal definition of reliability does not always reflect the user's perception of a system's reliability.
- The assumptions that are made about the environment where a system will be used may be incorrect. Usage of a system in an office environment is likely to be quite different from usage of the same system in a university environment.
- The consequences of system failures affect the perception of reliability. Unreliable windscreen wipers in a car may be irrelevant in a dry climate, and failures that have serious consequences (such as an engine breakdown in a car) are given greater weight by users than failures that are merely inconvenient.

Reliability and specifications
- Reliability can only be defined formally with respect to a system specification, i.e. a failure is a deviation from a specification.
- However, many specifications are incomplete or incorrect; hence a system that conforms to its specification may 'fail' from the perspective of system users.
- Furthermore, users don't read specifications, so they don't know how the system is supposed to behave.
- Perceived reliability is therefore more important in practice.

Availability perception
- Availability is usually expressed as a percentage of the time that the system is available to deliver services, e.g. 99.95%.
- However, this does not take into account two factors:
  - The number of users affected by the service outage. Loss of service in the middle of the night is less important for many systems than loss of service during peak usage periods.
  - The length of the outage. The longer the outage, the more the disruption. Several short outages are less likely to be disruptive than one long outage. Long repair times are a particular problem.

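To see what such a percentage does, and does not, say, it helps to convert it into the downtime it permits per year. The arithmetic below is a sketch added for illustration; the availability figures echo the examples quoted earlier in this chapter.

```python
# Illustrative arithmetic: translating an availability percentage into the
# downtime it permits over a year.

HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours_per_year(availability_percent: float) -> float:
    """Hours per year the system may be out of service at this availability."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

print(round(downtime_hours_per_year(99.95), 1))  # 4.4 hours per year
print(round(downtime_hours_per_year(99.9), 1))   # 8.8 hours per year

# Note what the percentage hides: 4.4 hours could be one long outage at peak
# time or dozens of short outages at night - very different levels of disruption.
```
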
Key points
- The dependability of a system reflects the user's trust in that system.
- Dependability is a term used to describe a set of related 'non-functional' system attributes: availability, reliability, safety and security.
- The availability of a system is the probability that it will be available to deliver services when requested.
- The reliability of a system is the probability that system services will be delivered as specified.

Chapter 11 – Security and Dependability (Lecture 2)

Reliability terminology
- Human error or mistake: human behavior that results in the introduction of faults into a system. For example, in the wilderness weather system, a programmer might decide that the way to compute the time for the next transmission is to add 1 hour to the current time. This works except when the transmission time is between 23.00 and midnight (midnight is 00.00 in the 24-hour clock).
- System fault: a characteristic of a software system that can lead to a system error. The fault is the inclusion of the code to add 1 hour to the time of the last transmission without checking whether the time is greater than or equal to 23.00.
- System error: an erroneous system state that can lead to system behavior that is unexpected by system users. The value of the transmission time is set incorrectly (to 24.XX rather than 00.XX) when the faulty code is executed.
- System failure: an event that occurs at some point in time when the system does not deliver a service as expected by its users. No weather data is transmitted because the time is invalid.

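The fault-error-failure chain in the table is easy to see in code. The sketch below is hypothetical - the actual wilderness weather system code is not given in the slides - and simply models the described behavior with hours on a 24-hour clock.

```python
# Hypothetical sketch of the fault described in the terminology table above.
# Hours are integers on the 24-hour clock (0-23).

def next_transmission_hour_faulty(current_hour: int) -> int:
    # The fault: add 1 hour with no check for the 23.00-to-midnight case,
    # so 23 produces the invalid value 24 (the system error).
    return current_hour + 1

def next_transmission_hour_corrected(current_hour: int) -> int:
    # Corrected version: wrap around at midnight, so 23.00 + 1 hour is 00.00.
    return (current_hour + 1) % 24

print(next_transmission_hour_faulty(23))     # 24 -> invalid time; no data sent (the failure)
print(next_transmission_hour_corrected(23))  # 0  -> 00.00, as intended
```
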
Faults and failures
- Failures are usually a result of system errors that are derived from faults in the system.
- However, faults do not necessarily result in system errors:
  - The erroneous system state resulting from the fault may be transient and 'corrected' before an error arises.
  - The faulty code may never be executed.
- Errors do not necessarily lead to system failures:
  - The error can be corrected by built-in error detection and recovery.
  - The failure can be protected against by built-in protection facilities. These may, for example, protect system resources from system errors.

A system as an input/output mapping (figure)

Software usage patterns (figure)

Reliability in use
- Removing X% of the faults in a system will not necessarily improve the reliability by X%. A study at IBM showed that removing 60% of product defects resulted in a 3% improvement in reliability.
- Program defects may be in rarely executed sections of the code, so they may never be encountered by users. Removing these does not affect the perceived reliability.
- Users adapt their behaviour to avoid system features that may fail for them.
- A program with known faults may therefore still be perceived as reliable by its users.

Reliability achievement
- Fault avoidance: development techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults.
- Fault detection and removal: verification and validation techniques are used that increase the probability of detecting and correcting errors before the system goes into service.
- Fault tolerance: run-time techniques are used to ensure that system faults do not result in system errors and/or that system errors do not lead to system failures.

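As a small illustration of the fault-tolerance idea (a sketch under assumed rules, not a technique prescribed by the slides), a run-time check can stop an erroneous value, such as the invalid transmission hour from the earlier example, from propagating into a visible failure.

```python
# Sketch of run-time fault tolerance: detect an erroneous state (an illegal
# hour) and recover instead of letting it become a failure. The validation
# rule and the fallback value are assumptions made for illustration.

def validated_transmission_hour(candidate_hour: int, fallback_hour: int = 0) -> int:
    """Return the candidate if it is a legal 24-hour clock value; otherwise
    recover by substituting a safe fallback and reporting the problem."""
    if 0 <= candidate_hour <= 23:
        return candidate_hour
    print(f"invalid hour {candidate_hour}; recovering with {fallback_hour}")
    return fallback_hour

print(validated_transmission_hour(24))  # error detected, corrected to 0
print(validated_transmission_hour(7))   # normal case passes through unchanged
```
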
Safety
- Safety is a property of a system that reflects the system's ability to operate, normally or abnormally, without danger of causing human injury or death and without damage to the system's environment.
- It is important to consider software safety, as most devices whose failure is critical now incorporate software-based control systems.
- Safety requirements are often exclusive requirements, i.e. they exclude undesirable situations rather than specify required system services. These generate functional safety requirements.

Safety criticality
- Primary safety-critical systems: embedded software systems whose failure can cause the associated hardware to fail and directly threaten people. An example is the insulin pump control system.
- Secondary safety-critical systems: systems whose failure results in faults in other (socio-technical) systems, which can then have safety consequences. For example, the MHC-PMS is safety-critical as failure may lead to inappropriate treatment being prescribed.

Safety and reliability
- Safety and reliability are related but distinct.
- In general, reliability and availability are necessary but not sufficient conditions for system safety.
- Reliability is concerned with conformance to a given specification and delivery of service.
- Safety is concerned with ensuring that the system cannot cause damage, irrespective of whether or not it conforms to its specification.

Unsafe reliable systems
- There may be dormant faults in a system that are undetected for many years and only rarely arise.
- Specification errors: if the system specification is incorrect, then the system can behave as specified but still cause an accident.
- Hardware failures generating spurious inputs: these are hard to anticipate in the specification.
- Context-sensitive commands, i.e. issuing the right command at the wrong time: often the result of operator error.

Safety terminology
- Accident (or mishap): an unplanned event or sequence of events which results in human death or injury, or damage to property or the environment. An overdose of insulin is an example of an accident.
- Hazard: a condition with the potential for causing or contributing to an accident. A failure of the sensor that measures blood glucose is an example of a hazard.
- Damage: a measure of the loss resulting from a mishap. Damage can range from many people being killed as a result of an accident to minor injury or property damage. Damage resulting from an overdose of insulin could be serious injury or the death of the user of the insulin pump.
- Hazard severity: an assessment of the worst possible damage that could result from a particular hazard. Hazard severity can range from catastrophic, where many people are killed, to minor, where only minor damage results. When an individual death is a possibility, a reasonable assessment of hazard severity is 'very high'.
- Hazard probability: the probability of the events occurring which create a hazard. Probability values tend to be arbitrary but range from 'probable' (say a 1/100 chance of a hazard occurring) to 'implausible' (no conceivable situations are likely in which the hazard could occur). The probability of a sensor failure in the insulin pump that results in an overdose is probably low.
- Risk: a measure of the probability that the system will cause an accident. The risk is assessed by considering the hazard probability, the hazard severity, and the probability that the hazard will lead to an accident. The risk of an insulin overdose is probably medium to low.

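The slides treat risk qualitatively, as a combination of hazard probability, hazard severity, and the probability that the hazard leads to an accident. The sketch below turns that combination into a rough ordinal score purely for illustration; the scales and weights are invented and are not part of the chapter.

```python
# Illustrative only: a rough ordinal risk score combining the three factors
# named above. The numeric scales are assumptions made for this sketch.

HAZARD_PROBABILITY = {"implausible": 0, "improbable": 1, "probable": 2}
HAZARD_SEVERITY = {"minor": 1, "serious": 2, "very high": 3, "catastrophic": 4}

def risk_score(probability: str, severity: str, p_accident: float) -> float:
    """Combine hazard probability, hazard severity, and the probability that
    the hazard actually leads to an accident."""
    return HAZARD_PROBABILITY[probability] * HAZARD_SEVERITY[severity] * p_accident

# Insulin overdose: improbable hazard, 'very high' severity, and a moderate
# chance that the hazard leads to an accident -> a modest score, in line with
# the 'medium to low' judgment above.
print(risk_score("improbable", "very high", 0.5))  # 1.5
```
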
Safety achievement
- Hazard avoidance: the system is designed so that some classes of hazard simply cannot arise.
- Hazard detection and removal: the system is designed so that hazards are detected and removed before they result in an accident.
- Damage limitation: the system includes protection features that minimise the damage that may result from an accident.

Normal accidents
- Accidents in complex systems rarely have a single cause, as these systems are designed to be resilient to a single point of failure. Designing systems so that a single point of failure does not cause an accident is a fundamental principle of safe systems design.
- Almost all accidents are a result of combinations of malfunctions rather than single failures.
- It is probably impossible to anticipate all problem combinations, especially in software-controlled systems, so achieving complete safety is impossible. Accidents are inevitable.

Software safety benefits
- Although software failures can be safety-critical, the use of software control systems contributes to increased system safety:
  - Software monitoring and control allows a wider range of conditions to be monitored and controlled than is possible using electro-mechanical safety systems.
  - Software control allows safety strategies to be adopted that reduce the amount of time people spend in hazardous environments.
  - Software can detect and correct safety-critical operator errors.

Security
- The security of a system is a system property that reflects the system's ability to protect itself from accidental or deliberate external attack.
- Security is essential as most systems are networked, so that external access to the system through the Internet is possible.
- Security is an essential pre-requisite for availability, reliability and safety.

Fundamental security
- If a system is a networked system and is insecure, then statements about its reliability and its safety are unreliable.
- These statements depend on the executing system and the developed system being the same. However, intrusion can change the executing system and/or its data.
- Therefore, the reliability and safety assurance is no longer valid.

Security terminology
- Asset: something of value which has to be protected. The asset may be the software system itself or data used by that system.
- Exposure: possible loss or harm to a computing system. This can be loss of or damage to data, or can be a loss of time and effort if recovery is necessary after a security breach.
- Vulnerability: a weakness in a computer-based system that may be exploited to cause loss or harm.
- Attack: an exploitation of a system's vulnerability. Generally, this is from outside the system and is a deliberate attempt to cause some damage.
- Threats: circumstances that have the potential to cause loss or harm. You can think of these as a system vulnerability that is subjected to an attack.
- Control: a protective measure that reduces a system's vulnerability. Encryption is an example of a control that reduces a vulnerability of a weak access control system.

Examples of security terminology (MHC-PMS)
- Asset: the records of each patient that is receiving or has received treatment.
- Exposure: potential financial loss from future patients who do not seek treatment because they do not trust the clinic to maintain their data; financial loss from legal action by the sports star; loss of reputation.
- Vulnerability: a weak password system which makes it easy for users to set guessable passwords; user IDs that are the same as names.
- Attack: an impersonation of an authorized user.
- Threat: an unauthorized user will gain access to the system by guessing the credentials (login name and password) of an authorized user.
- Control: a password checking system that disallows user passwords that are proper names or words that are normally included in a dictionary.

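The 'control' row above describes a password-checking rule. The sketch below shows one way such a check might look; it is illustrative only - the word lists are tiny stand-ins for a real dictionary and name list, and the minimum-length rule is an added assumption.

```python
# Sketch of a password-checking control like the one described above: reject
# passwords that are dictionary words, proper names, or the user's own ID.
# The word sets here are small stand-ins; a real check would load full lists.

COMMON_WORDS = {"password", "secret", "welcome", "letmein"}
PROPER_NAMES = {"alice", "bob", "smith", "jones"}

def password_allowed(password: str, user_id: str) -> bool:
    """Return True only for passwords that are not easily guessable."""
    lowered = password.lower()
    if lowered in COMMON_WORDS or lowered in PROPER_NAMES:
        return False
    if lowered == user_id.lower():
        return False
    return len(password) >= 8  # assumed minimum length, not from the slides

print(password_allowed("letmein", "jdoe"))        # False - dictionary word
print(password_allowed("smith", "smith"))         # False - proper name / same as user ID
print(password_allowed("t4u!vX9q-plum", "jdoe"))  # True
```
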
Threat classes
- Threats to the confidentiality of the system and its data: can disclose information to people or programs that do not have authorization to access that information.
- Threats to the integrity of the system and its data: can damage or corrupt the software or its data.
- Threats to the availability of the system and its data: can restrict access to the system and data for authorized users.

Damage from insecurity
- Denial of service: the system is forced into a state where normal services are unavailable or where service provision is significantly degraded.
- Corruption of programs or data: the programs or data in the system may be modified in an unauthorised way.
- Disclosure of confidential information: information that is managed by the system may be exposed to people who are not authorised to read or use that information.

Security assurance
- Vulnerability avoidance: the system is designed so that vulnerabilities do not occur. For example, if there is no external network connection then external attack is impossible.
- Attack detection and elimination: the system is designed so that attacks on vulnerabilities are detected and neutralised before they result in an exposure. For example, virus checkers find and remove viruses before they infect a system.
- Exposure limitation and recovery: the system is designed so that the adverse consequences of a successful attack are minimised. For example, a backup policy allows damaged information to be restored.

Key points
- Reliability is related to the probability of an error occurring in operational use. A system with known faults may be reliable.
- Safety is a system attribute that reflects the system's ability to operate without threatening people or the environment.
- Security is a system attribute that reflects the system's ability to protect itself from external attack.
- Dependability is compromised if a system is insecure, as the code or data may be corrupted.
