SECURE ELECTRONIC COMMERCE AUTHENTICATION PROTOCOLS IN ECONOMICALLY DEPRIVED COMMUNITIES

H. A. Eneh, O. Gemikonakli, and R. Comley
School of Computing Science, Middlesex University, The Burroughs, London, NW4 4BT
Email: [email protected]

Abstract

The development of authentication protocols has progressed from the notoriously error-prone informal methods to formal methods. Formal methods include those of belief logics, model checking, and inductive proofs. This paper emphasizes the need for a more rigorous approach to the analysis and verification of electronic commerce protocols, especially those to be deployed within environments where there is a high propensity towards fraud. A Communicating Sequential Processes (CSP) model of the principals in a simple network in such environments is presented. The model provides a means for reasoning about the strength of authentication protocols so as to minimize the success of attacks against e-commerce authentication protocols. There are several publications on the use of CSP to model processes and events, but none has hitherto rendered a concise description of attackers with a significant level of both deductive and inductive reasoning.

1. Introduction

Electronic commerce protocols encompass more operations than traditional protocols, which are predefined and agreed sequences of communication and computation steps followed by communicating principals. Authentication protocols constitute one such sequence of steps, followed primarily to verify, reliably, the claimed identities of the communicating principals. Authentication protocols, or cryptographic protocols more generally, have been axiomatic in the literature as the reliable means of identity verification (Clark and Jacob, 1997) (Gollman, 1996). The case for electronic commerce authentication protocols is to reliably verify the identities of the principals involved in electronic commerce transactions or payment schemes.
Beyond this responsibility, electronic commerce protocols are expected to ensure that transactions are unambiguous and coherent (Bolignano, 1997). Other desirable roles pertain to certain atomicity properties of e-commerce protocols such as money atomicity, goods atomicity, and validated receipt (Heintze et al, 1996), (Ray and Ray, 2000). The exponential growth of electronic commerce necessitates a corresponding growth of the associated protocols. Security, or the lack of it, is the main concern of electronic transactions, as the parties involved have no physical contact and must therefore rely on the integrity of the interconnecting devices and communication channel for their welfare and that of their transactions. The emphasis of this paper is on authentication. Authentication is chiefly addressed for the purpose of preventing attacks on networked systems resources. Thus, the popular preventive measures incorporate authentication as well as access control (Stallings, 2002) (Kizza, 2002). The other

requirements of computer network security include, but are not limited to, confidentiality, integrity, and availability. However, the result of authentication serves as input for all other security provisions such as access control; thus, authentication issues deserve critical appraisal in the context of preventing attacks against electronic commerce transactions by way of the associated protocols. Environments where attackers, criminals in real terms, derive motivation from the consequences of economic deprivation have been used here to describe various scenarios. Poverty, unemployment, and income inequality have all consistently been found to render areas crime-prone (NSW Bureau of Crime Statistics and Research, 2001) (Carlie, 2005). This helps explain why countries such as Nigeria, Indonesia, etc. have been registering high incidences of fraud-related crimes (Eneh and Gemikonakli, 2005b). Within these environments it is very difficult, if not impossible, to establish a safe location in which trusted parties, such as authentication or ticket-granting servers for the purposes of reliable authentication, can be housed. The physical security of the locations of such servers cannot be guaranteed due to the ease of bribery and other forms of social engineering, all of which make it easy for adversaries to access and possibly take over targeted locations. It therefore becomes necessary to adopt procedures that can raise the security provisions of authentication protocols, through engineering and analysis, in such a manner as to diminish the reliance on trusted parties within highly vulnerable environments. This concern is, in the final analysis, coupled with those of the trusted third party (Levi and Caglayan, 1997). The significance of a functional e-commerce framework cannot be overemphasized.
The analysis of authentication protocols, the means of ascertaining that the desired qualities hold, has progressed from the error-prone informal methods to the widely acknowledged formal methods. Model checking is developing a trustable reputation for revealing subtle flaws in authentication protocols. This is confirmed by various observations (Rubin and Honeyman, 1993), (Schneider, 1996), (Lowe, 1997), (Meadows, 2000), (Hoare, 2004). This paper discusses the modelling of protocol participants, including the adversary (attacker), in a manner that captures the full potential, deductive and inductive, of attackers in environments with a high propensity towards attacks. The Communicating Sequential Processes (CSP) approach is adopted for the models. CSP is an effective process algebra, a mathematical framework, for the description of interacting components. CSP has been used in conjunction with the model checker Failures-Divergence Refinement (FDR) with significant results (Lowe, 1997), (Ryan and Schneider, 2001). The remainder of this paper is organized as follows: background to the problem, related work, method of analysis, modelling of protocol participants, further work, and conclusion.

2. Background of the problem

The strength of authentication protocols lies in the extent of the verifications used to ensure that the protocols offer what they claim and, most importantly, to determine the extent and nature of the vulnerabilities that the protocols may contain. These vulnerabilities could be associated with design errors, unrealistic assumptions, or the pattern of information exchanges (Abadi and Needham, 1994) and (Gollman, 1996). The vulnerabilities aid in

attacks such as the popular masquerade, session hijacking, denial of service, or other man-in-the-middle attacks. It suffices to use an example protocol (Ray and Ray, 2000) to highlight the entities, or principals, that engage in an electronic commerce protocol in order to explain the need for authentication in such a protocol. Consider the protocol below:

1. TP → C : encrypted digital product
2. C → M : purchase order
3. M → C : product encrypted with a certain key Kp
4. M → TP : the decrypting key Kp^-1
5. C → TP : payment token and purchase order
6. TP → C : the decrypting key Kp^-1
7. TP → M : payment token

Figure 1: A sample e-commerce protocol

In the above protocol, TP is a trusted (third) party, C is the customer, and M is the merchant of a digital product. From the first step, there is no guarantee that TP is not an intruder or attacker attempting to obtain a purchase order, and therefore a payment token, from an unsuspecting customer. The same level of mistrust can easily be extended to all the principals in turn. The protocol rendered here is postulated on the basis of the theory of cross validation. This theory states that the encrypted messages compare if and only if the unencrypted messages compare (Ray and Ray, 2000). A critical look at the proposition of this theory reveals a certain confidence for an attacker with reasonable computation and communication abilities, as the provisions of the theory can easily be exploited in launching knowledge-based attacks. One of the initial solutions for reducing credit card fraud was Secure Electronic Transaction (SET). SET failed because it proved to be burdensome and did not have buy-in from the necessary parties (Peters, 2002). Many sellers and several electronic cash companies have been crippled by excessive credit card charge-backs (Peters, 2002). Charge-backs occur when a customer disputes a transaction.
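The absence of any step that authenticates TP before the customer releases a payment token can be made concrete with a short sketch. The Python fragment below is illustrative only; the step labels and the helper function are hypothetical and not part of (Ray and Ray, 2000):

```python
# Hypothetical transcription of the figure 1 protocol as (sender, receiver,
# payload) triples. Labels are simplified; this is not a faithful model.
PROTOCOL = [
    ("TP", "C", "encrypted digital product"),
    ("C",  "M", "purchase order"),
    ("M",  "C", "product encrypted with key Kp"),
    ("M",  "TP", "decrypting key Kp^-1"),
    ("C",  "TP", "payment token and purchase order"),
    ("TP", "C", "decrypting key Kp^-1"),
    ("TP", "M", "payment token"),
]

def steps_before_token_release(protocol, party="TP"):
    """Messages C accepts from `party` before C sends its payment token."""
    accepted = []
    for sender, receiver, payload in protocol:
        if sender == "C" and "payment token" in payload:
            break
        if sender == party and receiver == "C":
            accepted.append(payload)
    return accepted

# C accepts the encrypted product from TP (step 1) with no proof of TP's
# identity before releasing the payment token in step 5.
print(steps_before_token_release(PROTOCOL))  # ['encrypted digital product']
```

Running this over the trace highlights that step 1 is taken on trust alone, which is exactly the gap the paper argues an attacker in a high-fraud environment will exploit.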
Such scenarios include instances where a customer or cardholder claims he did not take part in a transaction, did not receive the goods, or believed the goods were not as represented by the merchant. An earlier security solution known as Secure Sockets Layer (SSL) emerged prior to SET. However, almost all SSL servers use software-based private keys stored on disk, with a hard-coded key in a stash file to permit unattended server restart. Any hacker who manages to access these files can easily determine the key (Peters, 2002). This is particularly disturbing given the high percentage of insider attacks claimed in numerous trade press articles. A hacker with the SSL private key who is capable of eavesdropping on SSL sessions could decrypt credit card information. Furthermore, a hacker who is capable of changing the DNS listing of the server could successfully impersonate the server to obtain credit card information.

3. Related work

The work by (Schneider, 1997) provides the background for reasoning about CSP models of principals in a communication network in the presence of an intruder. Furthermore, theorems were rendered that explain the derivatives of the intruder's knowledge and of the information passing through the communication channels. Such channels include those for transmission (trans) and those for reception (rec). The different flavours of authentication are also defined, and the exact nature of the authentication guarantees based on the contents of authentication messages is given. However, no attempt was made to describe an intruder with enhanced capabilities, such as being able to make specific inductions from the knowledge of certain items of information such as keys and the IDs of principals. A presumption of a trusted gateway was made in (Bolignano, 1997). According to this work, electronic commerce protocols have to address standard requirements such as confidentiality, integrity of data, cardholder account authentication, and merchant authentication, while authentication of the payment gateway is relegated to the background. Besides the responsibility of ensuring that transactions between parties are unambiguous and coherent, the gateway should be able to identify the respective responsibilities of each communicating party and subsequently be able to represent or prove these responsibilities to other interested parties such as a court of law. Nevertheless, because (Bolignano, 1997) assumed trusted gateways and used a method of analysis based on state exploration, one of the shortfalls associated with the approach is the state explosion problem, which is common to state-based techniques (Meadows, 2000). According to (Ray and Ray, 2000), model checking in the analysis of e-commerce protocols is a reliable option for ascertaining that the desirable properties of protocols hold.
Analysis thereof pertains to confirming whether such properties as money atomicity, goods atomicity, and validated receipt hold. Although (Ray and Ray, 2000) demonstrated the use of FDR and CSP in the verification of protocols, their work says little about issues of security, and authentication in particular. Within their work, a solution was introduced to make the protocol shown above in figure 1 failure resilient. However, such a solution further heightens the requirement for authentication, as it entails the storage of sensitive transaction information within the hosts until, according to the paper, the information is no longer required. Significantly, the paper opens up avenues to reason about security and the more important task of verifying the identities of the communicating principals prior to transmission of actual transaction or financial data.

4. Method of analysis

The work described in (Eneh and Gemikonakli, 2005c) presented an inference rule to be used in a finite proof system for the analysis of authentication protocols in general. The inference rule is part of a verifier, namely the pre-emptive protocol verifier, for the verification, and therefore the design, of attack-free and resilient authentication protocols. It is clear that the design of authentication protocols, when strictly informal, is

error-prone but, when strictly automatic (formal), becomes subject to the internal state of the underlying automaton. By implication, when an authentication protocol's design and analysis are functions of dedicated systems or programs, the protocol's correctness (effectiveness and efficiency) becomes a mere function of internal states or conditions. In line with this belief, the work described in (Eneh and Gemikonakli, 2005b) rendered a means to combine informal (manual) and formal methods of verification in a hybrid system for the verification of authentication protocols. This approach guarantees that the resulting protocols are pre-exposed to more forms of attack, manual and automatic, prior to and within production use. Besides the stated provision, a design method that provides for continuity in protocol verification has been rendered in (Eneh and Gemikonakli, 2005a), which described a cyclic analytical method for protocol analysis and design. This work laid out the entire set of phases for the analysis and design of protocols in a cyclic framework. The phases are arranged in a cyclic pattern to depict continuity, with the output of one phase serving as input to the subsequent phase. The phases are repeated whenever flaws are discovered, whether by the automatic system itself or externally through changes in industry standard specifications or attacks discovered by manual means.
For brevity, the phases are as follows. Specification is the initial phase. Refinement deals with fine-tuning the specification to remove ambiguities, such as both granting and denying a transaction in a complete run of a protocol. Design (or design with specification) deals with the actual design, or adoption as the case may be, of the protocol that matches the specification. Protocol specification analysis is the core phase of our analytical method and deals with all forms of automatic or manual discovery of attacks against the designed protocol for the authentication mechanism to be deployed. The remaining phases are implementation and implementation verification, which deal with automatic code generation and confirmation respectively. The core, or what may be called the kernel, of the design method is the protocol specification analysis phase. This phase houses the Pre-emptive Protocol Verifier (PPV) (Eneh and Gemikonakli, 2005a and 2005b). The PPV verifies protocols by modelling all instances of attacks that can be launched against a protocol, initially identifying all the roles involved in the protocol. The roles primarily include transmitting, receiving, and encrypting/decrypting messages. The PPV builds logic to decide the extent of the information that could be available to an attacker, and finally proceeds to verification by considering the possible effects the roles can have with the available information. It uses an inference rule of the form [X|Y]π ⇒ M. This inference rule has the implication that whenever the condition π is true, any principal, intruder included, who supplies the set of information x ∈ X or y ∈ Y to a protocol obtains the message m ∈ M; X represents the set of legitimate messages, or set of possible deductions, while Y represents the set of malicious messages, or set of possible inductions.
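As a rough illustration of how the rule [X|Y]π ⇒ M might be mechanized, the sketch below treats X, Y, and M as sets and π as a boolean. All names are hypothetical; this is a sketch of the rule's reading, not the PPV implementation:

```python
# Illustrative evaluation of [X|Y]π ⇒ M: when condition π holds, any
# principal supplying some x ∈ X (a possible deduction) or y ∈ Y (a
# possible induction) obtains the messages in M.

def apply_inference(supplied, X, Y, condition, M):
    """Return the messages from M obtainable with the supplied information."""
    if not condition:
        return set()
    if supplied & (X | Y):   # any x in X or y in Y suffices
        return set(M)
    return set()

X = {"IDc", "IDtgs"}          # legitimate messages (possible deductions)
Y = {"replayed_ticket"}       # malicious messages (possible inductions)
M = {"Tickettgs"}

# An intruder replaying a captured identifier obtains the ticket.
print(apply_inference({"IDc"}, X, Y, True, M))  # {'Tickettgs'}
```

The point of the sketch is that the rule is indifferent to whether the supplied item came from X or Y: legitimate and malicious knowledge yield the same message, which is precisely what makes the masquerade analyses below possible.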
By also considering the use of nonces (unique identifiers) as a parameter for the condition π, as well as a freshness condition, this approach differs slightly from those of (Debbabi et al, 1997). Besides, coverage is extended to more protocol types due to the nature of the inference rule. In this section we present the modelling of the principal components, attacker and participants, in a typical electronic commerce protocol. In order to start developing the model there is a need to adopt a grammar for the specification of the components of the model. Within the hybrid system described in (Eneh and Gemikonakli, 2005b) and also depicted

in figure 3, there exists a protocol synthesizer, which uses the same grammar as the protocol generator of (Perrig and Song, 1998) and is consistent with the description of the message space adopted in (Schneider, 1997). The grammar used here has the following message representation implications:

Message ::= (Principal | Nonce | Key) | Encrypted | Concatenated
Encrypted ::= (Message, Key)
Key ::= Public | Private | Shared
Concatenated ::= Message List
Message List ::= Message | Message, Message List

Figure 2: Message representation format.
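The grammar of figure 2 transcribes almost mechanically into typed data structures. The Python below is a hypothetical rendering for illustration, not the synthesiser's internal representation:

```python
# Hypothetical transcription of the figure 2 message grammar.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Principal:
    name: str

@dataclass(frozen=True)
class Nonce:
    label: str

@dataclass(frozen=True)
class Key:
    kind: str    # "public" | "private" | "shared"
    owner: str

@dataclass(frozen=True)
class Encrypted:
    body: "Message"   # Encrypted ::= (Message, Key)
    key: Key

@dataclass(frozen=True)
class Concatenated:
    parts: tuple      # Message List: one or more Messages

# Message ::= (Principal | Nonce | Key) | Encrypted | Concatenated
Message = Union[Principal, Nonce, Key, Encrypted, Concatenated]

# Example: principal A and its nonce nA encrypted under B's public key.
msg = Encrypted(Concatenated((Principal("A"), Nonce("nA"))),
                Key("public", "B"))
print(msg.key.kind)  # public
```

Frozen dataclasses make messages hashable, which is convenient when attacker knowledge is later treated as a set of messages.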

[Figure 3 (diagram): Protocol Specification and System Requirements feed the Protocol Specification Synthesiser; its output passes through the PPV and manual attack analysis, with a yes/no decision either releasing the Pre-emptive Protocol or looping back upon Standard Changes or New Attacks.]

Figure 3: Hybrid mechanism for the analysis of protocols

As depicted in figure 3, the protocol specification and the systems requirements are the items of information passed to the protocol synthesiser. The protocol specification is the security property, mainly strong authentication, expressed in a specification language; the systems requirement is a way of keeping the system overview in perspective during the

initial stages of design and of ensuring that system requirements such as hardware, operating system, and other paraphernalia are met. The protocol synthesiser then uses the grammar with the message representation format presented in figure 2 to express and model the protocol before proceeding to the verification phase. The verification phase is a combination of automatic verification using the PPV and supporting manual verification, as shown in the figure. After verification, the resulting protocol, shown as the pre-emptive protocol, can be released for production use. However, if an attack is discovered by any means, automatic or manual, or if industry standards vary, certain considerations have to be made about the fate of the protocol and the entire system. Such considerations include limiting or completely denying access to sensitive resources, and redeveloping a fresh or enhanced specification and passing it on to the protocol synthesiser for a fresh round of development and verification. However, if nothing critical has been discovered, the PPV is revisited in case the manual intervention has introduced any bugs.

5. Modelling of protocol participants

The participants in the protocol include the communicating principals, say A and B, who are the legitimate participants, and the adversary, say R, who is the attacker. In a typical electronic commerce protocol there also exists the trusted third party, who provides an arbitration service between the communicating principals, say A for customers and B for merchants. The language of CSP is used to present the models. To model the protocol participants, the same model is used for the legitimate principals A, B, and the trusted party, though for a specific protocol run one party plays the role of protocol initiator while the other plays the responder role.
The attacker, on the other hand, is modelled to depict attacker potentials over and above those of the legitimate principals, which, in a scenario of economic deprivation, are higher than those of the Dolev-Yao model and earlier models that ordinarily comply with it (Dolev and Yao, 1983) (Schneider, 1997). This special consideration for an attacker serves to pre-empt attacks in environments where there is an overwhelming enthusiasm, or propensity, towards attacking system resources, such as secured databases, for survival reasons. In this scenario, the attacker not only has the capacity to intercept messages in all directions, modify messages, inject new messages, and transmit messages, but also possesses sufficient communication and computation capabilities and enjoys the liberty of time and space.

Attacker model

To present the model of the attacker in CSP, the initial steps involve determining the extent of the information that could be available to an attacker with the aforementioned potentials. It is safe to use the inference rule introduced in (Eneh and Gemikonakli, 2005c) and the illustration thereof. Using the inference rule to analyse a typical Kerberos protocol in the presence of a TGS (Ticket Granting Server) reveals that the protocol is subject to a TGS masquerade attack. The network is modelled to depict the presence of an attacker at all times; the network thus corresponds to the description given in (Schneider, 1997) and is described as follows:

NET = (USERA ||| USERB) | [trans, rec] | ATTACKER

The above description implies that the users A and B inadvertently communicate with the attacker through transmission and reception channels, represented respectively as trans and rec. A clear model of this sort of network is introduced in (Schneider, 1997), which also presented valid theorems. The first theorem rendered the description of an attacker as:

ATTACKER sat (INIT ∪ (tr ⇓ trans)) ├ tr ⇓ rec

This theorem is used here to explain that the set of all messages that pass through the rec channel is a function of the initial knowledge of the attacker and the set of messages input on the trans channel. As the inference rule used in (Eneh and Gemikonakli, 2005c) considers both possible deductions and inductions, i.e. X and Y respectively from [X|Y]π ⇒ M, the attacker description presented above, which is of the same nature as the one presented in (Schneider, 1997), corresponds to the case for X, the possible deductions. To obtain the case for the possible inductions Y, some theorems shall be considered. These theorems, used in (Schneider, 1997), present the conditions which, when satisfied, reveal the deductions made by the attacker. Where the attacker is designated as I and the users as S, and I is a predicate on messages m, the following conditions apply:

1. ∀ m ∈ INIT . I(m)
2. (∀ m' ∈ S . I(m')) ∧ S ├ m ⇒ I(m)

These two conditions imply that if the attacker ever knows the messages that satisfy the predicate I, then the attacker is able to generate the messages that satisfy I. This is proven inductively by considering the 'generates' relation, denoted ├. The consideration of certain clauses will help to explain the proof further. Such clauses are as rendered in (Schneider, 1997), where one instance states that M1 : S ├ m ∧ S ├ K ⇒ S ├ K(m).
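The 'generates' relation can be pictured as a fixed-point computation over the attacker's knowledge. The sketch below uses a hypothetical symmetric-key encoding of messages (tuples tagged "enc" and "pair") and closes the set under projection and decryption; the unbounded encryption direction of clause M1 (S ├ m ∧ S ├ K ⇒ S ├ K(m)) is deliberately left out, as a real model would enumerate it lazily rather than eagerly:

```python
# Hypothetical knowledge closure for the 'generates' relation ├.
# Messages are atomic strings, pairs ("pair", a, b), or symmetric
# encryptions ("enc", key, body).

def closure(knowledge):
    """Close a knowledge set under projection and decryption."""
    S = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(S):
            if isinstance(m, tuple) and m[0] == "pair":
                # projection: knowing (a, b) yields a and b
                for part in m[1:]:
                    if part not in S:
                        S.add(part)
                        changed = True
            if isinstance(m, tuple) and m[0] == "enc" and m[1] in S:
                # decryption: knowing K(m) and K yields m
                if m[2] not in S:
                    S.add(m[2])
                    changed = True
        # The encryption direction (m and K yield K(m)) is unbounded,
        # so it is omitted here and would be enumerated on demand.
    return S

INIT = {"Kab"}                                   # attacker's initial knowledge
heard = {("enc", "Kab", ("pair", "nA", "A"))}    # overheard on trans
result = closure(INIT | heard)
print("nA" in result, "A" in result)  # True True
```

Without the key Kab in the initial set, the same overheard ciphertext yields nothing new, mirroring the distinction the theorem draws between what the attacker hears and what the attacker can generate.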
Then, with the possibility of the message derivative m' being an occurrence in the intruder event such that ∀ m' ∈ S . I(m'), by induction we obtain I(m) and I(K). In order to relate the above theorem to the inference rule proposed in (Eneh and Gemikonakli, 2005c), let us recall the demonstration of the inference with the Kerberos protocol in the presence of a TGS (ticket granting server). Refer to (Stallings, 2002) for a detailed description of the Kerberos protocol in the presence of a TGS. Step 1 is C → AS : IDc ║ IDtgs ║ Nonce1. The inference rule that applies to this step is of the form [ ] ⇒ IDc, IDtgs, Nonce1, where [ ] denotes an empty set. This implies that an intruder who intercepts the communication between a legitimate client who initiates

the protocol and an authentication server is able to obtain the identities of the client and of the server, as well as the nonce generated by the client. The second step is AS → C : IDc, Tickettgs, Ekc{Kc,tgs, Nonce1, IDtgs}, where Ekc{} is an encryption operation using the key Kc of C. This implies that an intruder who listened to the first step and transmitted the message in an attempt to masquerade as a legitimate client C will, according to the inference rule, receive relevant messages of the form [IDc, IDtgs] ⇒ IDc, Tickettgs, Ekc{Kc,tgs, Nonce1, IDtgs}, where the latter is the authenticator containing the ID of the ticket granting server TGS, a session key shared between the TGS and client C, and a nonce, all encrypted with the key of the client. Supposedly, only an uncompromised legitimate client can decrypt the authenticator to obtain the session key to be shared with the TGS. This scenario poses some difficulty for an intruder on a different broadcast network from the TGS. However, if by chance the TGS is on the same broadcast network as the intruder, the intruder only has to send a broadcast including the identity of a targeted principal, perhaps learnt earlier from step three of the protocol, that is C → TGS : Options ║ IDv ║ Times ║ Nonce2 ║ Tickettgs ║ Authenticatorcb. Only a valid TGS will respond with step four of the protocol, that is TGS → C : Realmc ║ IDc ║ Ticketv ║ Ekc,tgs[Kc,v ║ Times ║ Nonce2 ║ Realmv ║ IDv]. From the proposed inference rule, this implies that [IDv, Nonce2, Tickettgs, Authenticatorcb]Nonce1 ⇒ IDc, Ticketv, Ekc,tgs{Kc,v, Nonce2, IDv}. The latter is a second authenticator. The intruder arriving at this stage now possesses a valid ticket to be used with the application server V, together with a session key to be shared with the verifier, obtained by decrypting the second authenticator. The intruder can thereafter begin to impersonate the TGS as well as the client C.
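The accumulation of intruder knowledge claimed in this walkthrough can be tabulated step by step. The following sketch is a simplification with hypothetical labels: it records only the observable items named above, not the full Kerberos message formats:

```python
# What the walkthrough claims an intruder accrues by observing or replaying
# Kerberos-with-TGS steps one to four. Labels are simplified stand-ins.
TRACE = {
    1: {"IDc", "IDtgs", "Nonce1"},      # C -> AS, sent in the clear
    2: {"IDc", "Tickettgs"},            # AS -> C (authenticator encrypted)
    3: {"IDv", "Nonce2", "Tickettgs", "Authenticatorcb"},   # C -> TGS
    4: {"IDc", "Ticketv"},              # TGS -> C reply
}

def intruder_knowledge(trace, upto):
    """Union of everything observable through step `upto`."""
    k = set()
    for step in sorted(trace):
        if step > upto:
            break
        k |= trace[step]
    return k

# After step four the intruder holds a valid service ticket Ticketv, which
# per the inference rule suffices to impersonate the TGS and client C.
print("Ticketv" in intruder_knowledge(TRACE, upto=4))  # True
```

Notably, Ticketv only enters the set at step four: an intruder confined to passive observation of steps one and two gains identities and a nonce but no ticket usable with the application server.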
Steps five and six of the protocol are shown as follows:

C → V : Options ║ Ticketv ║ Authenticatorcc
V → C : Ekc,v[TS2 ║ Subkey ║ Seq#]

Going by the inference rule, these are read as [IDc, Ticketv, Ekc,tgs{Kc,v, IDv}]Nonce2 ⇒ Ekc,v[TS2 ║ Subkey ║ Seq#]. Thus, an application server responds with parameters for secured communication to an intruder, having been deceived into believing that it has been communicating with a legitimate client.

Model of the principals

The notion of concurrency implies that all the varying states of the communicating principals should be considered in order to form a valid and realistic model of each principal. However, for simplicity, attention is given to a single run of a protocol where principal A plays the initiator role while principal B plays the responder. The process descriptions offered in (Hoare, 2004) and (Schneider, 1997) exactly match the process assumptions made here for the principals. Thus, the models of the principals are as follows:

1. Principal A – the set of processes for user A initiating the protocol is given by:

USERA = i ∈ user • trans.A!i!pi(nA.A) → rec.A.i?pA(nA.x.i) → trans.A!i!pi(x) → Stop

2. Principal B – playing the responder role is given by the set of processes as follows:

USERB = rec.B?j?pB(y.j) → trans.B!j!pj(y.nB.B) → rec.B?j?pB(nB) → Stop

The above model implies that principal A reserves a choice of first transmitting its nonce and identity, through its transmission channel, to any chosen user destination i. The messages sent through the channels trans and rec are shown, in both cases, to be encrypted using the key pi under a public key encryption strategy, where i is the user. After this transmission, principal A is then able to receive its own nonce back from a responder, together with the responder's nonce and identity, and then to transmit the responder's nonce to the responder in order to ultimately authenticate itself to the responder before finally stopping. The second case is that of principal B, the responder role. Basically, the responder listens for transmissions on its receive channel, rec, for messages coming from another user, say j, containing the nonce of j and its identity. Whenever such a message arrives on the channel rec.B, B will send the nonce of j together with its own nonce and identity to that user. After this, B will wait to receive its nonce back from the user in order to authenticate that user, and finally stops.

6. Further work

In order to form a complete model of the network and all the participants, comprising the legitimate principals and attackers, there remains further development and validation of the model of the attacker with inductive capability in CSP. What has been shown in this paper is the capability of the attacker under assumptions of possible deductions. The deductions are the sets of information that the attacker can directly derive from messages in his possession, while the inductions are possible inferences or conjectures that the attacker can make with the available information.

7. Conclusion

The contribution of this paper is to provide a background for reasoning about attackers with additional potential for inductions as well as deductions.
Basically, the paper offers the basis for modelling attackers in environments with high susceptibility to attacks, which is the situation in societies with economic deprivation. Fundamentally, the growth and survivability of e-commerce play a substantial role in the development and emancipation of economies; therefore, mechanisms that can foster trust in the systems and architectures of deprived economies should evolve to satisfy much-needed security requirements. CSP is used to demonstrate the feasibility of modelling authentication protocol participants in such a manner as to capture their full potential. This provides a basis for extrapolating what the intruder can achieve with certain knowledge and, where this is achieved, attack pre-emption becomes attainable. In as much as the model of an attacker under possible deductions has been shown, further work remains to complete a precise model of an attacker under inductive conditions.

CSP, used with FDR, exposed flaws not discovered by preceding approaches (Lowe, 1995), (Ryan and Schneider, 2001), (Hoare, 2004). Having been used to discover a parallel session attack against the Needham-Schroeder protocol seventeen years after its publication, it is the approach receiving the most attention in the literature. Significantly, the CSP approach shows potential for adaptation towards the analysis of complex and specialized protocols.

References

Bolignano, D.: Towards the formal verification of electronic commerce protocols, Proceedings of the 10th IEEE Computer Security Foundations Workshop (CSFW '97), 1997.
Burrows, M., Abadi, M., and Needham, R.: A logic of authentication, ACM Transactions on Computer Systems, 8(1), (1990), pp 18-36.
Carlie, M.: "Economic deprivation", in Into the abyss: A personal journey into the world of street gangs, http://www.faculty.smsu.edu/m/mkc096f/, browsed Feb. 2005.
Clark, J., and Jacob, J.: A survey of authentication protocol literature: version 1.0, (1997).
Debbabi, M., Mejri, M., Tawbi, N., and Yahmadi, I.: From protocol specifications to flaws and attack scenarios: An automatic and formal algorithm, Proceedings of the 6th IEEE Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, 0-8186-7967-0/97, (1997).
Dolev, D. and Yao, A.: On the security of public key protocols, IEEE Transactions on Information Theory, 29(2), 1983.
Eneh, H. A. and Gemikonakli, O. (2005a): Analysis of security protocols for authentication in distributed systems, Proceedings of the IADIS Conference on Applied Computing '05, Volume 2, Algarve, Portugal, (2005), pp 301-305.
Eneh, H. A. and Gemikonakli, O. (2005b): Pre-emptive authentication protocol design and analysis: Hope for secure communication in societies with economic deprivation, Proceedings of the International Conference on Technology, Knowledge, and Society, Berkeley, February 2005.
Eneh, H. A. and Gemikonakli, O.
(2005c): An approach for the analysis of security standards for authentication in distributed systems, Proceedings of the ICEIS 3rd International Workshop on Security in Information Systems (WOSIS '05), Miami, USA, ISBN 9728865252, pp 21-30.
Gollmann, D.: What do you mean by entity authentication?, IEEE Symposium on Security and Privacy (SP '96), IEEE, (1996).
Heintze, N. and Tygar, J. D.: A model for secure protocols and their composition, IEEE Transactions on Software Engineering, 22(1), (1996), pp 16-30.
Hoare, C. A. R.: Communicating sequential processes, (2004); first published by Prentice Hall International in 1985 and MIT Press in 1988.
Kizza, J. M.: Computer network security and cyber ethics, McFarland and Company Inc. Publishers, ISBN 0-7864-1134-1, 2002.
Lampson, B., Abadi, M., Burrows, M., and Wobber, E.: Authentication in distributed systems: Theory and practice, ACM Transactions on Computer Systems, 10(4), (1992), pp 265-310.

Levi, A. and Caglayan, M.: The problem of trusted third party in authentication and digital signature protocols, ISCIS XII, Antalya, Turkiye, 1997.
Lowe, G.: An attack on the Needham-Schroeder public-key authentication protocol, Information Processing Letters, 56, (1995), pp 131-133.
Lowe, G.: A hierarchy of authentication specifications, Proceedings of the 10th Computer Security Foundations Workshop (CSFW '97), 1063-6900/97, (1997).
Mao, W. and Boyd, C.: Towards formal analysis of security protocols, Proceedings of the Computer Security Foundations Workshop VI, (1993), pp 147-158.
Meadows, C.: Extending formal cryptographic protocol analysis techniques for group protocols and low-level cryptographic primitives, Proceedings of the 1st Workshop on Issues in the Theory of Security, Geneva, (2000), pp 87-92.
Meadows, C.: Formal verification of cryptographic protocols: A survey, Proceedings of Asiacrypt '96, (1996).
Meadows, C.: Applying formal methods to the analysis of a key management protocol, Journal of Computer Security, 1(1), (1992), pp 5-35.
Peters, M. E.: Emerging eCommerce credit and debit card protocols, published via the symposium at http://ecommerce.ncsu.edu/ISEC/program.html, October 2002.
Perrig, A., and Song, D.: Looking for diamonds in the desert - Extending automatic protocol generation to three-party authentication and key agreement protocols, Proceedings of the 13th IEEE Computer Security Foundations Workshop (CSFW '00), 2000.
Perrig, A., and Song, D.: A first step towards the automatic generation of security protocols, Proceedings of the IEEE Symposium on Security and Privacy, 1998.
Ray, I., and Ray, I.: Failure analysis of an e-commerce protocol using model checking, IEEE, ISBN 076950610, 2000.
Rubin, A. D. and Honeyman, P.: Formal methods for the analysis of authentication protocols, Technical Report CITI TR 93-7, (1993).
Ryan, P., and Schneider, S.: Modelling and analysis of security protocols, Pearson Education Limited, Great Britain, ISBN 0201674718, 2001.
Schneider, S.: Verifying authentication protocols with CSP, Proceedings of the 10th Computer Security Foundations Workshop (CSFW '97), IEEE, (1997).
Stallings, W.: Cryptography and network security: Principles and practices, Third Edition, Prentice Hall, New Jersey, (2002).
Tobler, B. and Hutchison, A. C.: Generation of network security protocols from formal specifications, Proceedings of CSES-04, International Workshop on Certification and Security, Kluwer, 2004.