Fuzzy Quantification of Trust

Punam Bedi
Department of Computer Science, University of Delhi, Delhi – 110007, India.
Email: [email protected]

Harmeet Kaur
Department of Computer Science, Hans Raj College, University of Delhi, Delhi – 110007, India.
Email: [email protected]

Abstract

The presence of trust in human society is unquestionable, and as agents are designed to behave like humans, trust should play an important role as a characteristic of an agent. Trust in humans is subjective and non-dichotomous, and humans generally use linguistic terms to discuss it. However, quantification of trust is required to make the concept usable for agents. This paper proposes a trust model that uses fuzzy set theory to compute trust from linguistic terms; the non-dichotomous nature of trust is captured by the fuzzy sets. The capability of the trustee, past experience in dealing with the trustee, the recommendations of experts and external factors are used in the proposed trust model to compute trust.

Keywords: Trust, Agent, Fuzzy, Linguistic terms, Referral trust

1. Introduction

The Oxford dictionary defines trust as "n. firm belief that a person or thing may be relied on; confident expectation; responsibility". In the literature, trust is defined as a subjective probability with which an individual expects that another entity (a human, a software agent, an organization or a machine) will perform a given action on which its welfare depends; that is, some predictability is expected in the behavior of the other entity. Trust is subjective in human beings, but for agents to make use of trust, some quantification is required.

Researchers have used various models to evaluate trust. The Bayesian Network-Based Trust Model [16] gives an example of file downloading and considers the parameters, such as downloading speed, that are relevant to a trustworthy interaction in that setting. A specific application, computing the trust placed in a doctor using Fuzzy Cognitive Maps with fuzzy parameters, is given in Cooperating through Belief-based Trust Computation [3]; there, ability, availability and unharmfulness are treated as internal factors, while opportunity and danger are treated as external factors. An algorithmic form for the computation of trust, using past experiences and recommendations from other agents, is given in Trust Based Knowledge Outsourcing for Semantic Web Agents [8]. External factors are considered in [3] and [4], but they receive little emphasis, even though such factors can alter the behavior of even highly trustworthy entities.

When we interact with other human beings to inquire about the trustworthiness of an entity, we do not expect the answer to be a numerical value; rather, we expect a linguistic term [1], say HIGHLY TRUSTWORTHY. So when we design intelligent agents that behave like humans, they should preferably interact with each other using the same terminology as humans, to make them more believable. Some work on trust based on fuzzy concepts [3, 4] exists in the literature, but these papers do not consider fuzziness in terms of linguistic variables. In this paper, a generic trust model is proposed in which various external factors are taken into consideration in addition to the capability of the trustee, past dealings with the trustee and recommendations from other entities about the trustee.

This paper is organized as follows. Section 2 discusses the importance of trust in humans and agents. Section 3 discusses how trust depends upon capability, Section 4 gives the influence of past experience in dealing with the trustee agent, and Section 5 discusses the impact of recommendations about the trustee agent from other agents. Section 6 presents the proposed trust model based on fuzzy set theory; the influence of external factors on the computation of trust is also discussed in that section.


2. Importance of Trust

The presence of trust in human society is unquestionable [13]. We are part of a society in which everyone depends upon others to do their work so that all of us can survive. For example, we trust others to go to their workplaces so that there is a proper food supply, a transport system, etc. for us to use. An individual is a consumer of some services provided by others and a producer of other services to be consumed by others. We trust others to contribute to the stability of the society by doing their work, and they in turn expect us to do ours.

Nowadays, no single human being is self-sufficient; we have specialized skills. One cannot be a farmer, driver, mechanic, manager, etc. at the same time: only a finite number of areas of expertise are present in a human being. Similarly, in agents, especially mobile agents (agents whose work domain is the Web), the size of the code and knowledge base has to be kept as small as possible for efficiency. To minimize the code size, agents can be designed so that they have finite specialized skill sets and depend upon other agents to achieve their goal(s). An agent, X, plans the sequence of actions to be carried out to achieve its goal(s), and if the required actions are not part of the agent's skill set, or if the cost of X carrying them out itself is too high, then X delegates the task to an agent Y.

The relation of trust between the trustier [3, 4] agent X and the trustee agent Y is not one-to-one; rather, X may have more than one opinion about Y depending upon the kind of work to be assigned to Y. As in real life, we may have high trust in a doctor to treat diseases but may not trust the doctor to repair a damaged car. Similarly, the agent Y need not be assigned a single trust value by X; Y is assigned a trust value for every member of its finite specialized skill set. To keep the knowledge base small, X need not store trust values for every member of the skill set of every known agent (Section 3); rather, trust is evaluated dynamically (Section 6), and some information about the agents X has interacted with is stored in the knowledge base. This requires extra processing by X, but ensures that the most recent information is always taken into consideration. For example, if an agent Y acquires an extra set of skills or improves its competence level in a specific skill, then the latest information about Y is used to evaluate trust in Y.

The agent X has to keep a record of the previous dealings (Section 4) and ratings (Section 5), etc. of some of the agents that have interacted with it earlier. To keep the knowledge base up to date, a temporal dimension [2] is added to it: information is stored along with the time of interaction, and at regular intervals the old entries, i.e., the entries recorded before a specific time, are deleted. This takes into account the cases in which some agents are no longer alive or have not interacted with X for a long period, perhaps because they have changed their skill set and the new skill set is not required by X. It also ensures that the size of the knowledge base is kept under control by purging extraneous information.

Except for trivial application domains, agents require interaction with other agents, i.e., application domains require multi-agent systems [17].
This interaction is basically required for:
• delegation of tasks
• seeking advice / knowledge from other agents

For both of these, trust between the interacting agents is necessary. In fact, X will delegate a task to Y, or depend upon the knowledge provided by Y, only if it trusts Y. In humans, trust is subjective and non-dichotomous [13]: there is no black and white in trustworthiness; rather, it represents a range, not a unique value. For example, "this doctor is very skilled" implies a range and not a unique numeric value. How we arrive at this judgment about a doctor depends upon many factors, say the educational qualifications of the doctor, our previous experience with the doctor and the opinions of other patients. From this, it can be seen that trustworthiness does not depend only on a single factor such as capability or skill; many other factors also play an important part in forming trust in an entity. Also, if two or more possible trustee agents have the same skill, then X should select the agent that maximizes profitability with minimum risk. This is because when we trust others, we risk our well-being on them; their failure may result in our failure. The factors that affect the building of trust in an entity are discussed in the following sections.


2.1 Degree of trust

As discussed in Section 2 above, trust is not dichotomous; rather, it is a range. This means that trust may be graded, and the extent to which one is trustworthy is the degree of trust (DoT) of the entity. In [5, 6], the degree of trust is defined as a subjective certainty of the pertinent beliefs; in other words, it is the degree of predictability of the expected behavior of the other entity. The factors on which trust depends are as follows:
• capability of the agent
• past experience
• recommendations from other agents

In addition to these, external factors also play an important part in the decision-making process. We may not doubt the trustworthiness of an entity, but sometimes the external factors are such that the entity may not behave in the expected manner. The external factors may be favorable or adverse for the entity, because of which it may work better or worse than expected. For example, if a doctor is called for an emergency and gets stuck in a traffic jam, we cannot say that the doctor is not trustworthy; rather, the external factors made him/her behave that way. The external factors are generally beyond our control, but this does not mean we should plan without taking their effect into consideration. So, when a task is delegated or information is sought from another agent, appropriate consideration must be given to external factors and to the remedial steps that may be taken in case of any problem. This makes the system more robust.

To take into account the interval-type nature of trust, we use fuzzy set theory to evaluate the degree of trust derived from the factors mentioned above (these factors are also given a fuzzy treatment), and the degree of trust obtained from the proposed trust model is also fuzzy. All these factors are expressed as linguistic terms, and the degree finally obtained is also translated into a linguistic term. For example, the capability information that is obtained from yellow page agents about the trustee agent Y takes the form of one of the linguistic terms HIGHLY COMPETENT, COMPETENT, NOT COMPETENT, etc. Similarly, the other factors are also in linguistic terms. These factors are discussed in detail in Sections 3 to 5. Linguistic terms are only approximate values, and trapezoidal membership functions are good enough to capture the vagueness of such linguistic assessments, since it may be impossible or unnecessary to obtain more accurate values [7]. Hence, we use the trapezoidal possibility distribution (Fig. 1) for the linguistic terms in all cases.

Fig. 1: Possibility distribution for a linguistic term COMPETENCE (trapezoidal: membership rises from 0 at α to 1 at β, remains 1 up to γ, and falls back to 0 at δ)
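To make the trapezoidal representation concrete, a minimal Python sketch of the membership function of Fig. 1 is given below. The function name and the sample parameter values are illustrative, not part of the model itself:

```python
def trapezoidal_membership(x, alpha, beta, gamma, delta):
    """Membership of a crisp value x in a linguistic term whose
    trapezoidal possibility distribution has support [alpha, delta]
    and full-membership core [beta, gamma] (Fig. 1)."""
    if x <= alpha or x >= delta:
        return 0.0                              # outside the support
    if beta <= x <= gamma:
        return 1.0                              # on the plateau
    if x < beta:
        return (x - alpha) / (beta - alpha)     # rising edge
    return (delta - x) / (delta - gamma)        # falling edge

# Example: the term COMPETENT on a 0-1 competence scale (assumed values)
print(trapezoidal_membership(0.55, 0.4, 0.5, 0.7, 0.8))  # -> 1.0
print(trapezoidal_membership(0.45, 0.4, 0.5, 0.7, 0.8))  # -> 0.5
```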

3. Capability of the agent

A trust value is assigned for every member of the finite specialized skill set of an agent. The skill sets of the agents are not mutually exclusive, i.e., more than one agent can have the same skill, but they are not necessarily equally competent in performing the specific task. For example, one of the agents may be using a more efficient algorithm to get the work done. Hence, even agents with the same skill are given different trust values according to their competence. This grading is used to select the agent to which work will be delegated.


3.1 Source of information about the capabilities of the agents

In the working environment of the agents, how can a trustier agent X find out about the capability of the trustee agent Y? One option is that Y itself provides the information to X. But how does Y know that an agent X needs its services? Similarly, agent X cannot inquire of every agent in its working environment whether they possess a particular skill and, if so, what their competence in it is. To overcome this problem, a concept similar to the three-party trust model [5, 6] is proposed. There are sources of information (about the capabilities of the agents) in the form of yellow page agents or information agents in the working environment. Hence, there are three parties in the fray: the trustier agent X, the trustee agent Y, and the information agents or yellow page agents (Fig. 2).

Fig. 2: Relationship between the trustier agent X, the trustee agent Y, and the information agents or yellow page agents (Y registers with a yellow page agent, the yellow page agent gives information about Y to X, and X and Y then interact)

The information agents or yellow page agents keep information such as the field of expertise and the competence level of the other agents. When an agent Y comes into existence, it registers with one of the yellow page agents. On registration, an agent has to provide information about its expertise and capabilities to the yellow page agent. For such a setup to work, X has to trust the yellow page agent for the information it provides about Y, and further, the yellow page agent has to trust Y for the information it provides about itself. This again becomes a problem of evaluating trust in other agents; in this case, the trustworthiness of the sources of information. To overcome this problem, there are certification agents that certify the competence of Y, and the yellow page agents verify the claims of Y with them. Before putting Y in its list of competent agents, a certification agent tests Y and evaluates its competence according to its performance. Just as there is a hierarchy of Certification Authorities (CAs) in digital signature verification [15], here too there is a hierarchy of certification agents: an agent at a given level certifies the agents directly under it. This increases the trust in the system.

For an agent to be able to make use of the services of the yellow page agents, it should know about their existence. One possibility is that the information about the yellow page agents is hard-coded into the agents. It is not necessary that information about all the yellow page agents be coded into all the agents; rather, similar to the CAs, there is a hierarchy of yellow page agents, with only the root yellow page agent's information coded into the agents, and a yellow page agent at a given level in the hierarchy keeps a record of the agents directly under it.

3.2 Type of information provided by Certification Agents

The certification agents provide information in the form of linguistic terms, i.e., they inform the yellow page agents that Y is HIGHLY COMPETENT, COMPETENT, NOT COMPETENT, etc. in performing, say, a job J. The yellow page agents check this information against the information provided by Y and, if it is found correct, pass it on to X. If the information provided by Y does not match the information provided by the certification agents, then the yellow page agent need not register Y. This information is further used by X in evaluating trust in Y. The information exchanged is not in natural language; rather, only the linguistic terms (the CAPITALIZED words) are exchanged. We propose the following formula to be used by the trustier agent X to find the competence C:

C = (γ − β) / (δ − α)    (1)

Here, α, β, γ and δ are the parameters of X corresponding to the linguistic term recommended by the yellow page agents.


The competence C is taken as the ratio of the range over which the membership value of the linguistic term is 1 to the entire range over which the membership value is non-zero.
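For illustration, equation (1) can be computed directly from X's trapezoid parameters for the recommended term; the parameter values in this sketch are assumed:

```python
def competence(alpha, beta, gamma, delta):
    """Equation (1): ratio of the full-membership core [beta, gamma]
    to the entire non-zero support [alpha, delta]."""
    return (gamma - beta) / (delta - alpha)

# X's (assumed) parameters for the term COMPETENT
print(competence(0.4, 0.5, 0.7, 0.8))  # -> 0.2 / 0.4 = 0.5
```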

4. Past Experiences

This is the temporal dimension of trust and does not apply to agents that are under consideration for the first time. Since there is a temporal component of trust, we have to keep a record of the previous interactions between X and the agents it has interacted with. We propose that the knowledge base store the experience of the previous interactions along with the time of interaction, and that this stored experience be used to evaluate trust in the agents in order to determine whether there should be further interaction with them. If, out of n agents, the agent Y is selected for delegation of work, then the experience value of only Y is updated in the knowledge base. The knowledge base is updated after the culmination of the interaction (Section 4.1). The ratio of successful interactions to the total number of interactions [16] has been used to evaluate trust. We do not classify interactions as merely satisfactory or not satisfactory; rather, fuzzy grading is done and the satisfaction level of the interaction is determined. To find the satisfaction level, the agent X decides on the decisive factors (and, if required, the importance/weights to be given to every decisive factor) that it considers important for a satisfactory interaction, as sketched below. Y is then marked on these factors and the interaction is graded accordingly in the range from 0 to 1, which gives the satisfaction level of the current interaction.
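One plausible realisation of this grading, assuming a simple weighted average over the decisive factors (the factor names and weights are illustrative; the paper does not fix an aggregation scheme):

```python
def satisfaction_level(scores, weights):
    """Grade an interaction in [0, 1] as the weighted average of the
    scores Y obtained on X's decisive factors."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[f] * scores[f] for f in weights)

# Hypothetical decisive factors for a file-download task
weights = {"speed": 0.5, "completeness": 0.3, "timeliness": 0.2}
scores = {"speed": 0.8, "completeness": 1.0, "timeliness": 0.6}
print(satisfaction_level(scores, weights))  # -> 0.82
```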

4.1 Updating Knowledge base

The agent X has to keep a record of its interactions with Y for future reference. Rather than storing the linguistic term and the time of every interaction, a consolidated effect of the interactions, along with the time of the latest interaction, is stored in the knowledge base. To find this consolidated effect, the following formula is used:

E_{t+1} = E_t ∩ S_t    (2)

Here, ∩ is the fuzzy intersection operator, E_{t+1} is the new cumulative satisfaction level to be stored in the knowledge base, E_t is the cumulative satisfaction level already stored in the knowledge base, S_t is the satisfaction level of the current interaction, and E_{t+1}, E_t, S_t ∈ [0, 1]. If X has interacted with Y for the first time, then the value of E_t can be set to 0 or to some other fixed value in the range from 0 to 1. The time of the latest interaction is stored in the knowledge base, which is periodically cleaned of inactive entries, i.e., the entries recorded before a specific time, to keep its size under control.
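With min as the fuzzy intersection (the standard choice [11], though the paper leaves the t-norm open, so this is an assumption), the update of equation (2) reduces to:

```python
def update_experience(E_t, S_t):
    """Equation (2): E_{t+1} = E_t intersect S_t, taking min as the
    (assumed) fuzzy intersection on [0, 1]."""
    return min(E_t, S_t)

E = 0.7                          # cumulative satisfaction stored so far
E = update_experience(E, 0.82)   # satisfaction of the latest interaction
print(E)  # -> 0.7 (min keeps the more pessimistic of the two values)
```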

5. Giving and Receiving Recommendations to/from others

In our system, an agent plays the role of a trustier as well as a referee. As a trustier, it seeks recommendations from other agents, and as a referee, it gives recommendations.

5.1 Receiving Recommendations from others

When X comes across an agent Y that was not previously known to it, it may seek opinions about Y from other expert agents (agents having expertise in the relevant field in which Y is expected to work) known to it. In case X wants a larger set of experts (in addition to the ones already known to it), the yellow page agents help X by providing references to more agents that are experts in the relevant field. It is not necessary that the opinion of every such agent be taken into consideration; rather, X selects the set of referees from this pool of experts and may even assign different levels of importance, in the form of ratings, to the opinions of different expert referees. The knowledge base stores the field of expertise of the referee agents, which is found using the yellow page agents, and the ratings to be given to their opinions. A referee may STRONGLY RECOMMEND, RECOMMEND WITH RESERVATIONS, RECOMMEND, NOT RECOMMEND, etc. The parameters α, β, γ and δ used to represent the trapezoidal possibility distribution for a linguistic term need not be the same for X and the referee agent R; the updating of ratings takes care of this. For example, if X observes that R STRONGLY RECOMMENDs Y, but Y does not perform satisfactorily according to X's criteria, then X decreases the rating given to the opinion of R (Section 5.1.1); this indirectly maps the α, β, γ and δ of the possibility distributions of X's linguistic terms to the corresponding parameters of R. If more than one referee gives an opinion about Y, it is quite possible that the experts will not agree with each other about the trustworthiness of Y. For such cases, the concept of fuzzy (linguistic) majority [7, 10] can be used.

5.1.1 Updating referral trust

The referral trust is the trust of the trustier in the referee to provide trustworthy information about the trustee agent. This referral trust is updated, as in the case of the satisfaction level, after the culmination of the interaction. The trustier agent grades the performance of the trustee agent according to its own criteria, and the difference between this grading and the one provided by the referee is used to update the rating of the referee. In other words, if the trustier agent X had to act as a referee for another agent Z, then the recommendation it would give to Z (Section 5.2) is used to determine the value φ by which the rating must be updated. The following formula is used to update the rating of R:

R_{t+1} = R_t + φ    (3)

Here, R_t is the old rating used in evaluating the fuzzy majority, R_{t+1} is the new rating to be stored in the knowledge base for future interactions, φ is the increment or decrement in the rating (Table 1), and R_t, R_{t+1}, φ ∈ [0, 1]. The rating R_{t+1} is set to 1 if its value exceeds 1. If an agent is acting as a referee for the first time, then R_t can be set to 0 or to a constant value. To calculate φ, Table 1 is used (only some sample values are shown). The values in the table are subjective, and different agents may use different values of φ for the conditions mentioned in the first two columns.

Linguistic term suggested by the referee R to X about Y | Linguistic term X itself would suggest to agent Z (Section 5.2) | φ
STRONGLY RECOMMEND | STRONGLY RECOMMEND | +0.2
STRONGLY RECOMMEND | RECOMMEND WITH RESERVATIONS | +0.1
STRONGLY RECOMMEND | NOT RECOMMEND | -0.2
RECOMMEND | STRONGLY RECOMMEND | +0.1
RECOMMEND | RECOMMEND | +0.2
RECOMMEND | NOT RECOMMEND | 0.0
NOT RECOMMEND | RECOMMEND WITH RESERVATIONS | +0.1
NOT RECOMMEND | RECOMMEND | 0.0
NOT RECOMMEND | STRONGLY RECOMMEND | -0.2

Table 1: Table for updating rating of referees
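A sketch of the update of equation (3), with Table 1 encoded as a lookup (the lower clamp at 0 is an assumption; the paper only specifies the cap at 1):

```python
# Sample (referee's term, X's own term) -> phi, from Table 1
PHI = {
    ("STRONGLY RECOMMEND", "STRONGLY RECOMMEND"): +0.2,
    ("STRONGLY RECOMMEND", "RECOMMEND WITH RESERVATIONS"): +0.1,
    ("STRONGLY RECOMMEND", "NOT RECOMMEND"): -0.2,
    ("RECOMMEND", "STRONGLY RECOMMEND"): +0.1,
    ("RECOMMEND", "RECOMMEND"): +0.2,
    ("RECOMMEND", "NOT RECOMMEND"): 0.0,
    ("NOT RECOMMEND", "RECOMMEND WITH RESERVATIONS"): +0.1,
    ("NOT RECOMMEND", "RECOMMEND"): 0.0,
    ("NOT RECOMMEND", "STRONGLY RECOMMEND"): -0.2,
}

def update_rating(R_t, referee_term, own_term):
    """Equation (3): R_{t+1} = R_t + phi, capped at 1 (and, by
    assumption, floored at 0)."""
    phi = PHI[(referee_term, own_term)]
    return max(0.0, min(1.0, R_t + phi))

print(update_rating(0.5, "STRONGLY RECOMMEND", "NOT RECOMMEND"))  # -> 0.3
```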

At regular intervals, the information about referees with a rating of 0 is removed from the knowledge base. This keeps the size of the knowledge base under control and ensures that information about referees whose opinions no longer matter is removed.


5.2 Acting as a referee

The value of trust is evaluated using recommendations from other agents. Similarly, an agent X may have to act as a referee for other agents and provide the knowledge it has about an agent Y to another agent. For this, only the past experience of the referee agent X (Section 4) in dealing with Y should be considered. This holds for us as well: when we seek the opinion of another individual about an entity, we are basically interested in knowing how good or bad the referee's dealings with that entity were. Table 2 maps the cumulative experience to the linguistic label returned to the inquiring agent.

E_{t+1} (Section 4.1) | Linguistic label to be returned to the inquiring agent
between 0.9 and 1 | STRONGLY RECOMMEND
between 0.4 and 0.9 | RECOMMEND
between 0.2 and 0.4 | RECOMMEND WITH RESERVATIONS
between 0 and 0.2 | NOT RECOMMEND

Table 2: Table for mapping the experience to linguistic terms

This table is also subjective, and again different agents may use different values, but the receiving agent, through the rating it assigns to the referee, will indirectly map them to its own parameter set.
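A direct encoding of Table 2 might look as follows; the treatment of the interval boundaries is an assumption, since the table leaves them open:

```python
def recommendation_label(E):
    """Map the cumulative experience E (equation (2)) to the
    linguistic label of Table 2."""
    if E > 0.9:
        return "STRONGLY RECOMMEND"
    if E > 0.4:
        return "RECOMMEND"
    if E > 0.2:
        return "RECOMMEND WITH RESERVATIONS"
    return "NOT RECOMMEND"

print(recommendation_label(0.7))  # -> RECOMMEND
```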

6. To trust or not to trust

Given all the factors above, how do we conclude whether or not to trust an agent? And if more than one agent is under consideration, how do we select one of them? In this paper, we propose that the degree of membership in the set trustworthiness, i.e., the degree of trust (DoT), be used to find a suitable trustee agent. The degree of trust is defined in Section 2.1 and depends upon the fuzzy factors discussed in Sections 3 to 5. We propose the following formula to quantify the degree of trust of the trustier agent X in the trustee agent Y, written DoT(X, Y), using the factors described earlier:

DoT(X, Y) = a·C ∩ b·E_{t+1} ∩ c·M    (4)

Here, ∩ is the fuzzy intersection operator; a, b and c are the weights for capability, past experience and recommendations received from other agents, respectively; C is the capability factor (equation (1)); E_{t+1} is the contribution of past experience (equation (2)); M is the fuzzy majority of the recommendations of the referees (Section 5); and a + b + c = 1.

To find a suitable agent, we propose the following. There is a threshold for the trust value below which an agent is not acceptable. If there is a single agent under consideration and its DoT comes out lower than the threshold value, then that agent is rejected. Similarly, from a set of candidate agents, the one with the highest DoT, provided it is greater than the threshold, is selected for the work. It is quite possible that none of the agents surpasses the threshold set for the degree of trust; in that case, X has to start again with a new set of agents. This procedure may still fail if the agent can neither find agents with DoT above the threshold nor find a new set of agents with which to restart the evaluation process. The knowledge base need not store the DoT of the agents X comes across; rather, the DoT should be generated dynamically whenever required. This increases the processing requirements, but the latest information about all the factors is used to compute trust.
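A sketch of equation (4) and the threshold-based selection, again assuming min for the fuzzy intersection; the weights and threshold are illustrative, since the paper treats them as subjective:

```python
def degree_of_trust(C, E, M, a=0.4, b=0.3, c=0.3):
    """Equation (4): DoT = min(a*C, b*E, c*M) with a + b + c = 1,
    taking min as the (assumed) fuzzy intersection."""
    assert abs(a + b + c - 1.0) < 1e-9
    return min(a * C, b * E, c * M)

def select_trustee(candidates, threshold=0.1):
    """Return the candidate with the highest DoT above the threshold,
    or None if every candidate falls below it."""
    best = max(candidates, key=lambda y: y["dot"])
    return best if best["dot"] > threshold else None

candidates = [{"name": "Y1", "dot": degree_of_trust(0.5, 0.7, 0.6)},
              {"name": "Y2", "dot": degree_of_trust(0.8, 0.4, 0.9)}]
print(select_trustee(candidates))  # -> the Y1 entry (DoT = 0.18 > 0.1)
```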

6.1 External factors

External factors can influence the behavior of even a highly trustworthy agent. When we trust an entity, we are basically risking our well-being on that entity. What if the trustee fails to do the required job? In this case, in addition to the trustier agent, the trustee agent may also lose something (say, successful termination would have resulted in some monetary exchange between the agents). The trustier agent must be prepared to take appropriate steps to minimize the negative effects, if any, so that the entire operation does not fail. The trustee agent does not fail only because of external factors; many other factors can cause failure. For example, (harmful) agents may be specifically designed to harm other agents, like a computer virus. The intentions of the trustee agents are questionable in such cases. However, in this paper we restrict our discussion to agents that do not have negative intentions. This way of handling external factors is similar to risk management, where we try to predict the types of risks possible for the application, their impacts, and the measures that may be taken to minimize the damage. Handling external factors with appropriate measures for damage control increases the robustness of the system. In the proposed trust model, to take into account the possible impact of external factors, either DoT(X, Y) may be adjusted by multiplying it by an appropriate factor, or the threshold may be set accordingly. For example, if an agent perceives that the external factors are not favorable, then the threshold value may be set higher so that only agents that are more persistent, in other words more trustworthy, are selected.
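Either adjustment can be expressed in a line or two; the scaling factor below is a subjective, assumed value, as the paper does not prescribe one:

```python
def effective_dot(dot, external_factor=1.0):
    """Dampen DoT by a factor in (0, 1] when external conditions
    are perceived as adverse (Section 6.1)."""
    return dot * external_factor

# Equivalent alternative: leave DoT unchanged and raise the threshold
print(effective_dot(0.18, external_factor=0.8))  # -> ~0.144
```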

6.2 Trusting Newcomers

The trustier agent X treats agents that are introduced into the system for the first time (case 1), or that have not interacted with it before (case 2), as newcomers. In case 1, only their capability and the external factors are relevant for evaluating trust. In case 2, in addition to these factors, the opinions of other agents that know the new agent are taken into account. Hence, in case 1, past experience and recommendations are not considered in equation (4), and in case 2, past experience is not considered.
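For newcomers, the missing factors can simply be dropped from equation (4) and the remaining weights renormalised; this redistribution is our assumption, as the paper leaves it open:

```python
def newcomer_dot(C, M=None, a=0.4, c=0.3):
    """Equation (4) for newcomers: past experience is always absent;
    recommendations M are available only in case 2. The weights of
    the missing factors are redistributed proportionally."""
    terms, weights = [C], [a]
    if M is not None:            # case 2: recommendations exist
        terms.append(M)
        weights.append(c)
    total = sum(weights)
    return min((w / total) * t for w, t in zip(weights, terms))

print(newcomer_dot(0.6))          # case 1: capability only -> 0.6
print(newcomer_dot(0.6, M=0.8))   # case 2: capability + recommendations
```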

7. Conclusions

In this paper, we have tried to give a computational form to trust evaluation using fuzzy set theory. Humans increasingly use computers as an extension of themselves; especially in e-commerce applications, users depend upon agents to make sales and purchases on the Web on their behalf. When humans deal with one another, trust is the basis of the interaction, and if agents are to work as humans do, then they also have to learn how to use trust to get work done and to protect their interests from other agents. We are in the process of implementing a prototype with a small set of agents that uses the trust model proposed in this paper as the basis for interaction between the agents. Trust also finds applications in contract nets: during bidding, not only the quotation but also the trust placed in the bidder should be used in deciding to whom to give the contract.

8. References

[1] Bedi P., Kaur H., Malhotra A., "Fuzzy Dimension to Databases", Harnessing and Managing Knowledge, 37th National Convention of Computer Society of India, 2002.
[2] Bedi P., Sharma K.D., Kaushik S., "Time Dimension to Frame Systems", Journal of Information Science and Technology, Vol. 2, No. 3, April 1993, pp. 212-228.
[3] Castelfranchi C., Falcone R., Pezzulo G., "Cooperating through a Belief-based Trust Computation", in Proceedings of the Twelfth IEEE International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, Edinburgh, Scotland, UK, 2003.
[4] Castelfranchi C., Falcone R., Pezzulo G., "Trust in Information Sources as a Source for Trust: A Fuzzy Approach", in Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS'03), Melbourne, Australia, 2003, pp. 89-96.
[5] Castelfranchi C., Falcone R., "Social Trust: A Cognitive Approach", in Trust and Deception in Virtual Societies, Castelfranchi C. and Yao-Hua (eds.), Kluwer Academic Publishers, 2001, pp. 55-90.
[6] Castelfranchi C., Falcone R., "Principles of Trust for MAS: Cognitive Anatomy, Social Importance, and Quantification", in Proceedings of the Third International Conference on Multi-Agent Systems, Paris, France, 1998, pp. 72-79.
[7] Delgado M., Herrera F., Herrera-Viedma E., Verdegay J.L., Vila M.A., "Aggregation of Linguistic Information Based on a Symbolic Approach", in Computing with Words in Information/Intelligent Systems 1, Zadeh L.A. and Kacprzyk J. (eds.), Physica-Verlag, Heidelberg, 1999.
[8] Ding L., Zhou L., Finin T., "Trust Based Knowledge Outsourcing for Semantic Web Agents", in Proceedings of the IEEE/WIC International Conference on Web Intelligence, Beijing, China, 2003.
[9] Jennings N.R., Wooldridge M.J., "Applications of Intelligent Agents", in Agent Technology: Foundations, Applications and Markets, Jennings N.R. and Wooldridge M.J. (eds.), Springer-Verlag, Berlin, 1998.
[10] Kacprzyk J., "Group Decision-Making with a Fuzzy Linguistic Majority", Fuzzy Sets and Systems, 18, 1986, pp. 105-118.
[11] Klir G.J., Yuan B., "Fuzzy Sets and Fuzzy Logic: Theory and Applications", Prentice-Hall, Englewood Cliffs, NJ, USA, 1995.
[12] Lu, Lu Z., Li Y., "TRUST! – A Distributed Multi-Agent System for Community Formation and Information Recommendation", in IEEE International Conference on Systems, Man, and Cybernetics, Volume 3, 2001, pp. 1734-1739.
[13] Marsh S., "Formalizing Trust as a Computational Concept", Ph.D. thesis, University of Stirling, UK, 1994.
[14] Nwana H.S., "Software Agents: An Overview", Knowledge Engineering Review, Vol. 11, No. 3, Cambridge University Press, 1996, pp. 1-40.
[15] Perlman R., "An Overview of PKI Trust Models", IEEE Network, November/December 1999, pp. 38-43.
[16] Wang Y., Vassileva J., "Bayesian Network-Based Trust Model", in Proceedings of the IEEE/WIC International Conference on Web Intelligence, Beijing, China, 2003.
[17] Wooldridge M.J., "An Introduction to Multi-Agent Systems", John Wiley and Sons, West Sussex, England, 2002.
[18] Zimmermann H.-J., "Fuzzy Set Theory and Its Applications", Kluwer Academic Publishers, Norwell, Massachusetts, USA, 2001.
