



ORMS: Use-cases Version 0.21

Working Draft, 27 October 2008

Specification URIs:

This Version:

.html

.doc

.pdf

Previous Version:

.html

.doc

.pdf

Latest Version:

.html

.doc

.pdf

Latest Approved Version:

.html

.doc

.pdf

Technical Committee:

OASIS Open Reputation Management Systems (ORMS) TC

Chair(s):

Anthony Nadalin

Nat Sakimura

Editor(s):

Mahalingam Mani

Related work:

This specification replaces or supersedes:



This specification is related to:



Declared XML Namespace(s):

Abstract:

Towards arriving at a standard protocol for exchanging reputation information between reputation data providers and consumers, and at a portable format for reputation data and meta-data, a reference model is described. The model is evaluated and validated against use-cases to derive requirements for a portable reputation-provider data format that ensures openness of ownership, privacy and confidentiality protection, and management of reputation data.

Status:

This document was last revised or approved by the OASIS Open Reputation Management Systems (ORMS) TC on the above date. The level of approval is also listed above. Check the “Latest Version” or “Latest Approved Version” location noted above for possible later revisions of this document.

Technical Committee members should send comments on this specification to the Technical Committee’s email list. Others should send comments to the Technical Committee by using the “Send A Comment” button on the Technical Committee’s web page at .

For information on whether any patents have been disclosed that may be essential to implementing this specification, and any offers of patent licensing terms, please refer to the Intellectual Property Rights section of the Technical Committee web page (.

The non-normative errata page for this specification is located at .

Notices

Copyright © OASIS® 2007. All Rights Reserved.

All capitalized terms in the following text have the meanings assigned to them in the OASIS Intellectual Property Rights Policy (the "OASIS IPR Policy"). The full Policy may be found at the OASIS website.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published, and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this section are included on all such copies and derivative works. However, this document itself may not be modified in any way, including by removing the copyright notice or references to OASIS, except as needed for the purpose of developing any document or deliverable produced by an OASIS Technical Committee (in which case the rules applicable to copyrights, as set forth in the OASIS IPR Policy, must be followed) or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by OASIS or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and OASIS DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY OWNERSHIP RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

OASIS requests that any OASIS Party or any other party that believes it has patent claims that would necessarily be infringed by implementations of this OASIS Committee Specification or OASIS Standard, to notify OASIS TC Administrator and provide an indication of its willingness to grant patent licenses to such patent claims in a manner consistent with the IPR Mode of the OASIS Technical Committee that produced this specification.

OASIS invites any party to contact the OASIS TC Administrator if it is aware of a claim of ownership of any patent claims that would necessarily be infringed by implementations of this specification by a patent holder that is not willing to provide a license to such patent claims in a manner consistent with the IPR Mode of the OASIS Technical Committee that produced this specification. OASIS may include such claims on its website, but disclaims any obligation to do so.

OASIS takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on OASIS' procedures with respect to rights in any document or deliverable produced by an OASIS Technical Committee can be found on the OASIS website. Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this OASIS Committee Specification or OASIS Standard, can be obtained from the OASIS TC Administrator. OASIS makes no representation that any information or list of intellectual property rights will at any time be complete, or that any claims in such list are, in fact, Essential Claims.

The names "OASIS", are trademarks of OASIS, the owner and developer of this specification, and should be used only to refer to the organization and its official outputs. OASIS welcomes reference to, and implementation and use of, specifications, while reserving the right to enforce its marks against misleading uses. Please see for above guidance.

Table of Contents

1 Introduction

1.1 Terminology

1.1.1 ORMS Definitions

1.2 Normative References

1.3 Non-Normative References

2 Overview

3 ORMS Reference Model

4 Use-cases

4.1 OpenID in Trusted Exchange

4.1.1 Actors

4.1.2 Description

4.1.3 Input

4.1.4 Output

4.2 IdP (Identity Provider) Reputation Service

4.2.1 Actors

4.2.2 Description

4.2.3 Input

4.2.4 Output

4.3 Content Filtering

4.3.1 Actors

4.3.2 Description

4.3.3 Input

4.3.4 Output

4.4 Second Life Avatars

4.4.1 Actors

4.4.2 Description

4.4.3 Input

4.4.4 Output

4.5 Nodes in Second Life Grid

4.5.1 Actors

4.5.2 Description

4.5.3 Input

4.5.4 Output

4.6 Social-network derived Peer reputation

4.6.1 Actors

4.6.2 Description

4.6.3 Input

4.6.4 Output

4.7 Digital Signature (signing key) reputation

4.7.1 Actors

4.7.2 Description

4.7.3 Input

4.7.4 Output

4.8 Peer Reputation in P2P Networks

4.8.1 Actors

4.8.2 Description

4.8.3 Input

4.8.4 Output

4.9 Seller Reputation

4.9.1 Actors

4.9.2 Description

4.9.3 Input

4.9.4 Output

4.10 Reputee Influence: Social & Professional Networks

4.10.1 Actors

4.10.2 Description

4.10.3 Input

4.10.4 Output

4.11 Adaptive Trust: Enterprise unified communications (UC)

4.11.1 Actors

4.11.2 Description

4.11.3 Input

4.11.4 Output

4.12 Federated Trust in UC

4.12.1 Actors

4.12.2 Description

4.12.3 Input

4.12.4 Output

4.13 Peer-peer reputation (between actors)

4.13.1 Actors

4.13.2 Description

4.13.3 Input

4.13.4 Output

5 Security and Privacy considerations

A. Acknowledgements

B. Non-Normative Text

C. Revision History

1 Introduction

Social and corporate networking interactions in the Internet age have given rise to exponential growth in real-time and asynchronous communications. The openness of these good-faith protocols and networks is now increasingly exposed to the threats and exploits of the community.

Moreover, corporate and social networks must deal with a range of users whose roles and privileges vary dynamically in time and (network) domain. This requires corporations to adapt to wired and wireless networks, traditional and virtually-extended perimeters, extranets, federations and partner-portals involving a considerable degree of transitive trust.

A framework is required to

➢ identify and qualify accidental, well-behaved and malicious privilege/usage patterns, and

➢ quantify (or trust-score) those patterns so that (social and corporate network) services can adapt trust levels and authorized access to resources.

An interoperable trust-scoring mechanism is required to relate trust scores across reputation domains and vendor boundaries.

This document describes use-cases drawn from existing e-commerce transaction systems, social networks, and converged-communications scenarios ranging from corporate enterprise networks to peer-to-peer networks.

The use-case section is preceded by ORMS Terminology and Overview sections, as well as a Reference Model that aids the discussion of the use-cases.

1.1 Terminology

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC2119].

1.1.1 ORMS Definitions

Actor

Participating entities in a transaction. For example, in the Reputation Systems context: the reputation scoring-service (provider or reputer), the service using the Reputation Provider (relying party), and the client being evaluated (reputee).

Avatar

An Avatar is an incarnation, embodiment or virtual manifestation of an actor’s profile in a Social or Professional Network Domain.

Alternate definition: a computer user's representation of himself or herself, whether in the form of a three-dimensional model used in computer games, a two-dimensional icon (picture) used on Internet forums and other communities, or a text construct found on early systems [1].

UC

Unified Communications. A term denoting the convergence of IP-based multimedia (voice, video and data) and all forms of call and multimedia/cross-media message-management functions controlled by an individual user for both business and social purposes [2].

Online reputation mechanisms

Online reputation mechanisms, also known as reputation systems (Resnick et al., 2000; Dellarocas, 2003a), use the Internet’s bi-directional communication capabilities to artificially engineer large-scale word-of-mouth networks in which individuals share opinions and experiences on a wide range of topics, including companies, products, services, and even world events (Dellarocas, 2005).

Reputation Systems

See Online reputation mechanisms.

Reputation

Reputation is a concept that arises in repeated game settings when there is uncertainty about some property (the “type”) of one or more players in the mind of other players (Wilson, 1985).

Reputation Score

A Reputation Score of a Player (Reputee) on the Type (Criteria) by other players (Reputor) is the subjective probability assigned by the Reputor that the Reputee fulfils the Criteria (Sakimura, 2008).

Reputation (alternate definition)

Reputation is a collective evaluation of an entity based on factual and/or subjective data about it, and is used as one of the factors for establishing trust in that entity for a specific purpose.

Reputation is a metric (a score, a rank, a state, a multi-dimensional profile, etc.) associated with an entity (a person, a business, a digital identity, a website, a system, a device, a category of devices, a computing resource, etc.) or with a tuple [entity, attribute(s)] (e.g. [person, skill]) in a particular domain and at a particular moment in time.

Reputation domain (or Reputation Name-space)

The encompassing domain where a reputation is defined (to be refined)

Reputation Compute-Engine

A reputation for an entity is computed using a reputation calculator, based on different types of input data about the entity (available within the domain or imported into the domain). The reputation calculator combines and weights one or more input data about the entity, according to a reputation algorithm and contextual information available at the time of computation. (A minimal illustrative sketch appears at the end of this section.)

Contextual information

(to be defined)

Reputation algorithm

A domain-specific algorithm for computing reputations. A reputation algorithm is designed taking into account the characteristics of the encompassing domain: topology (centralized or distributed reputation computation), entities to which the reputation is associated, entities that produce input data, entities that consume reputations, available types of input data, type of contextual information available, desired properties of the computed reputation (robustness, fairness, etc.).

Reputation (input) data

The data upon which the reputation is computed. Input data can be of different types, for example:

➢ Subjective data about the entity: e.g. ratings and feedback from peers, claims and endorsements

➢ Static and dynamic characteristics of the entity: e.g. demographics, preferences

➢ Behavior data, stemming from measurements and observations within a system: e.g. logs of entity’s past actions, history of interactions, parametric data

➢ “Real world” data about the entity: e.g. background checks, credit history

➢ Inferred data about an entity: e.g. text analytics

Reputation Management System

(to be refined) A reputation management system may include mechanisms for: collecting data about entities (generating data inputs or integrating external data); computing reputations; making sure the system is fair (e.g. providing bootstrapping mechanisms for new entities); performing actions based on reputations (e.g. trust computations, automatic decisions); revoking reputations, allowing entities legitimate control over their reputation and allowing entities to challenge their reputations (governance); making sure the system is not abused (security); and making sure the privacy of entities is respected (i.e., that the association between an entity and its reputation is only disclosed to authorized parties).

RP

Relying Party. (See Reputee in the context of OpenID).

OP

OpenID Provider. The reputation Compute-engine in the OpenID model.

Reputer

The actor that evaluates another actor and produces reputation input (ratings, scores) about it.

Reputee

The actor whose reputation is being evaluated.

RSP

Reputation Service Provider.

VoIP

Voice over Internet Protocol.

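The following non-normative sketch (in Python) illustrates how the terms Reputation Compute-Engine, Reputation algorithm and Reputation (input) data fit together. The weighted-mean rule and the bootstrap value are illustrative assumptions only; ORMS does not prescribe any particular algorithm.

    from dataclasses import dataclass

    @dataclass
    class InputDatum:
        value: float   # one piece of reputation input data, normalized to [0, 1]
        weight: float  # weighting assigned by the (domain-specific) reputation algorithm

    def compute_reputation(inputs: list[InputDatum]) -> float:
        # Weighted mean of the input data: one simple instance of a reputation algorithm.
        total = sum(d.weight for d in inputs)
        if total == 0:
            return 0.0  # assumed bootstrap value for an entity with no usable inputs
        return sum(d.value * d.weight for d in inputs) / total

    # Example: two peer ratings and one behavioural observation, weighted differently.
    print(compute_reputation([InputDatum(0.9, 2.0), InputDatum(0.6, 1.0), InputDatum(1.0, 0.5)]))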

1.2 Normative References

[RFC2119] S. Bradner, Key words for use in RFCs to Indicate Requirement Levels, IETF RFC 2119, March 1997.

1.3 Non-Normative References

[OpenIDW] OpenID Wikipedia Page

[OpenID] OpenID Community Website

[Dellarocas] Dellarocas, C., 2005, "Reputation Mechanisms".

[Wilson] Wilson, R., 1985, Reputations in Games and Markets. A. Roth, ed. Game-Theoretic Models of Bargaining, Cambridge University Press, Cambridge, UK, 27-62.

[Sakimura] Sakimura, N., 2008, "What is Reputation?"

[veracite] Veracite Research Project (IBM)

[enisasec] Carrara, E., Hogben, G., Reputation-based Systems: A Security Analysis, ENISA Position Paper, October 2007.

2 Overview

The use of reputation systems has been proposed for various applications such as:

• Validating the trustworthiness of sellers and buyers in online auctions (e-commerce websites have shown that reputation can have a large influence on sellers)

• Detecting free riders in peer-to-peer networks

• Ensuring the authenticity of signature keys in a web of trust.

• Smarter searching of web sites, blogs, events, products, companies and other individuals.

Reputation in these examples refers to others’ opinions about an entity. Reputation is one of the factors upon which trust can be based, through the use of verifiable claims. Reputation changes with time, and both trust and reputation are always relative to a context.

There are various methods for generating a user's reputation data or trustworthiness. Some are based on users' feedback through appropriate feedback channels. Others have viewers participate in the reputation-building process through the user's profile at specific sites and communities. Each method has its limitations in terms of susceptibility to bad actors, manipulation of data for specific purposes, and spammers.

3 ORMS Reference Model

The following figure represents a generalized reputation model.


Figure 1 Generalized reputation model [to be replaced with a regular diagram - ed]

The primary components of the reference model are:

1. Input Sources: Reputation input Data Collectors (RDC).

2. Reputation Data (RD): portable reputation data generated by the input sources and fed into a Reputation Computation Engine (RCE, or reputation calculator).

3. Reputation Context (RC): allows filtering of the input data and selection of the appropriate algorithms for pre-processing and computation.

4. Reputation Score (RS): the outcome of the reputation evaluation of an entity (intended to be portable).

5. Reputation Consumer (Reputee): the consumer of the reputation score, which uses it as a yardstick for computing the degree of trust in the entity it serves.

Thus, the primary objective and challenge is to make the reputation input data and reputation formats interoperable (portable) across vendor boundaries and reputation domains.


Figure 2 Reputation Reference Model [to be edited as we go along - ed]
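As a non-normative illustration of the model, the sketch below (Python) gives hypothetical shapes to RD, RC, RS and the RCE. None of these type or field names are defined by this specification, and the mean-based engine is an arbitrary placeholder algorithm.

    from dataclasses import dataclass

    @dataclass
    class ReputationData:      # RD: portable input data about an entity
        entity: str
        source: str            # the RDC that collected this datum
        value: float           # normalized rating or observation

    @dataclass
    class ReputationScore:     # RS: portable outcome of the evaluation
        entity: str
        context: str           # RC: the reputation context/domain
        score: float

    def reputation_compute_engine(context: str, data: list[ReputationData]) -> ReputationScore:
        # RCE: combines the input data for one entity within one context.
        # Assumes data is non-empty; a simple mean stands in for a real algorithm.
        return ReputationScore(entity=data[0].entity, context=context,
                               score=sum(d.value for d in data) / len(data))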

4 Use-cases

4.1 OpenID in Trusted Exchange

1 Actors

The identified actors in an OpenID reputation framework are:

1. OpenID Provider

2. OpenID Relying Party (Reputee)

3. Reputation Service (Reputer)

2 Description

Trusted Exchange is a secure protocol for data exchange between an OpenID Provider (OP) and a Relying Party (RP). The OP provides the RP access to user data based on the RP's reputation.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

The following are the general inputs to the OpenID trusted exchange.

1. Numeric count of successful transactions

2. Numeric count of claims

4 Output

A score value, accumulated to evaluate the RP's trustworthiness.
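A non-normative sketch of how the two inputs above might accumulate into such a score; the claim weighting and the saturating formula are assumptions, not part of the use-case.

    def rp_trust_score(successful_transactions: int, claims: int) -> float:
        # Accumulate the two inputs into a score in [0, 1).
        evidence = successful_transactions + 2 * claims  # claims weighted higher (assumption)
        return evidence / (evidence + 10.0)              # smoothing constant 10 is arbitrary

    print(rp_trust_score(successful_transactions=40, claims=5))  # ~0.83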

4.2 IdP (Identity Provider) Reputation Service

The identity provider role in this use-case is closely analogous to the OpenID Provider role in the previous use-case.

1 Actors

1. Identity Provider

2. User (service clients relying on the IdP)

3. Identity Provider Reputation Service (Reputer – providing the trustworthiness of the chosen IdP)

2 Description

The generic use case applies to all browser-redirect-based Web single-sign-on systems (e.g., OpenID, SAML Web Profile, etc.) This use case has received particular attention in the OpenID community as an alternative (or a supplement) to OpenID Relying Parties (RPs) having to maintain their own OpenID Provider whitelists/blacklists.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Options (not mutually exclusive):

➢ Vote from authenticated IdP users.

➢ Vote from registered IdPs.

➢ Vote from registered third parties.

4 Output

A score value, accumulated to evaluate the IdP’s trustworthiness.
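A non-normative sketch of aggregating the three (not mutually exclusive) vote channels listed above; the channel names and weights are illustrative assumptions.

    WEIGHTS = {"authenticated_user": 1.0, "registered_idp": 2.0, "third_party": 1.5}

    def idp_score(votes: list[tuple[str, int]]) -> float:
        # votes are (channel, +1 or -1) pairs; returns a weighted net score.
        return sum(WEIGHTS[channel] * v for channel, v in votes)

    print(idp_score([("authenticated_user", +1), ("registered_idp", +1), ("third_party", -1)]))  # 1.5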

4.3 Content Filtering

This use-case aims to describe reputation as trust meta-data for content-filtering. It references the (as yet unpublicized) Veracite research project [veracite].

1 Actors

1. Users of web content (producers, evaluators, consumers, etc.)

2. Veracite server(s)

2 Description

This scenario is based on the Veracite research project from IBM. A Veracite server provides a service for binding actor information to web content (where the actor can be a person - author, evaluator, etc. - or an automated system), together with assertions from that actor about the specific content (actor information and web content are distributed; the server only provides the binding). This "trust metadata" is used by content consumers to filter content according to their trust preferences. Actors in a Veracite system can have associated reputations, which become another parameter for content filtering.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

1. Content-providing actor’s assertions about content.

2. The (Veracite) service’s binding (vouching) of the content to the content-providing actor’s identity.

4 Output

The system does not produce reputation scores; it relies on portable reputations provided by third parties. (The requirement is that the reputation information can be used for filtering and that the context to which it applies be well specified.)
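The filtering step can be illustrated with the following non-normative sketch; the field names are hypothetical, and the reputation values are assumed to be imported from third-party providers as described above.

    from dataclasses import dataclass

    @dataclass
    class TrustMetadata:
        content_id: str          # the web content being vouched for
        actor: str               # author, evaluator, or automated system
        actor_reputation: float  # imported, portable third-party reputation
        assertion: str           # the actor's assertion about the content

    def filter_content(bindings: list[TrustMetadata], min_reputation: float) -> list[str]:
        # Keep only content vouched for by actors above the consumer's trust threshold.
        return [b.content_id for b in bindings if b.actor_reputation >= min_reputation]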

4.4 Second Life Avatars

1 Actors

1. SecondLife (SL) reputation service

2. Avatars

2 Description

Enabling portability of avatar reputations from one SL node to another. (As the grid diversifies and specialized SL nodes emerge, this will require "translation" of avatar reputations between nodes.)

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Externalized description of the metadata that defines an avatar; peer ratings for an avatar in a given node; historical data for an avatar in a given node.

4 Output

TBD
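Although the output is still TBD, the "translation" of avatar reputations mentioned in the description can be illustrated with the simplest possible non-normative sketch: a linear rescaling between the score ranges of two nodes. (A real translation would also have to map contexts and metadata.)

    def translate_reputation(score: float, src: tuple[float, float], dst: tuple[float, float]) -> float:
        # Linearly map a score from the source node's range to the destination node's range.
        (lo_s, hi_s), (lo_d, hi_d) = src, dst
        return lo_d + (score - lo_s) * (hi_d - lo_d) / (hi_s - lo_s)

    print(translate_reputation(7.5, src=(0.0, 10.0), dst=(0.0, 1.0)))  # 0.75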

4.5 Nodes in Second Life Grid

1 Actors

1. SecondLife reputation service

2. SL nodes

3. Avatars

2 Description

An SL grid is emerging in which different nodes can be controlled by different entities: SL servers are no longer under the sole control of Linden Labs, and anybody is able to put up an SL node and integrate it into the SL grid. This opens up the possibility of new business scenarios (e.g. "business-oriented" SL nodes) but also of malicious SL nodes; having a reputation associated with a node would help.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Per-node ratings submitted to a reputation service.

4 Output

[TBD]

4.6 Social-network derived Peer reputation

1 Actors

Members of a social network (reputees and reputers).

2 Description

Members of a social network who have a relationship with member A are randomly sampled and asked to vouch for or rate member A with respect to a specific criterion. The identities of the members vouching/rating are optionally kept anonymous, but in any case they are known to be members in good standing.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Scores stated by the vouching members, together with the frequency and recency of the vouching members’ activity and interactions.

4 Output

Personal score value.
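A non-normative sketch combining the stated scores with recency of activity, as listed under Input; the exponential decay and the 90-day half-life are assumptions.

    import math

    def peer_reputation(ratings: list[tuple[float, float]], half_life_days: float = 90.0) -> float:
        # ratings: (score in [0, 1], age_in_days) pairs from the randomly sampled members.
        if not ratings:
            return 0.0  # assumed bootstrap value
        weights = [math.exp(-math.log(2) * age / half_life_days) for _, age in ratings]
        return sum(s * w for (s, _), w in zip(ratings, weights)) / sum(weights)

    print(peer_reputation([(1.0, 10.0), (0.5, 200.0)]))  # recent vouching dominates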

4.7 Digital Signature (signing key) reputation

1 Actors

Key holders

2 Description

Signers in a web of trust sign keys to express trust in the assertion that the key belongs to the holder’s name (subject name and/or subjectAltName) contained in the digital certificate. The more people sign, the greater the trust in that assertion. Note that the only assertion subject to reputation is that the key belongs to the named individual; it says nothing about the trustworthiness of that individual.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Number of signing keys and their reputation; CRL for signing keys.

4 Output

Key trust value.
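A non-normative sketch of deriving the key trust value from the inputs above; the saturation rule (full trust after roughly five reputable signatures) is an arbitrary assumption.

    def key_trust(signer_reputations: dict[str, float], revoked: set[str]) -> float:
        # Ignore signatures whose signing keys appear on the CRL, then saturate.
        valid = [rep for signer, rep in signer_reputations.items() if signer not in revoked]
        return min(1.0, sum(valid) / 5.0)

    print(key_trust({"alice": 1.0, "bob": 0.8, "mallory": 1.0}, revoked={"mallory"}))  # 0.36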

4.8 Peer Reputation in P2P Networks

1 Actors

Nodes in a peer-to-peer network.

2 Description

Internet service providers may use the following reputation-based fairness criterion for regulating bandwidth allocation according to observed usage behavior: nodes in a P2P network gain download bandwidth according to their upload behavior. In essence, a bandwidth economy is maintained.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Average upload bandwidth for a node, number of files uploaded and similar upload metrics.

4 Output

Node download bandwidth.
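A non-normative sketch of the bandwidth economy; the base allowance and coefficients are illustrative assumptions.

    def allowed_download_kbps(avg_upload_kbps: float, files_uploaded: int) -> float:
        # Download allowance grows with observed upload behaviour.
        return 64.0 + 0.5 * avg_upload_kbps + 8.0 * files_uploaded

    print(allowed_download_kbps(avg_upload_kbps=128.0, files_uploaded=12))  # 224.0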

4.9 Seller Reputation

1 Actors

Sellers and buyers in an e-commerce system.

2 Description

Buyers vote on the trustworthiness/quality of sellers.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Buyers rate the sellers (potentially weighted by the prices of items bought and the buyer-reputations of the voters).

4 Output

Seller reputation as a percentage.
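A minimal non-normative sketch of the percentage computation, assuming binary positive/negative buyer ratings (price- or buyer-reputation weighting is omitted):

    def seller_reputation_percent(positive: int, negative: int) -> float:
        total = positive + negative
        return 100.0 * positive / total if total else 0.0  # 0.0 is an assumed bootstrap value

    print(seller_reputation_percent(positive=98, negative=2))  # 98.0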

4.10 Reputee Influence: Social & Professional Networks

This class of use-cases deals with reputee-influenced criteria in social and professional networks.

1 Actors

1. Customers or users relating to a professional and/or company (reputers)

2. Professional and/or company being evaluated (reputee)

3. Reputation Service Provider (RSP)

2 Description

A specific aspect is that reputers, reputees and the reputation service provider may all determine the criteria to be evaluated. Both reputers and reputees may apply their respective weightings, allowing the reputation service provider to calculate overall ratings and rankings of professionals and/or companies within a specific business segment.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Scores on specific criteria given by reputers, processed by the reputation service provider to ensure relevance and avoid fraud.

4 Output

A consolidated score biased by both reputer and reputee weightings.
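A non-normative sketch of one plausible combination rule: multiply the reputer-side and reputee-side weightings per criterion before normalizing. The rule itself is an assumption; an RSP may combine weightings differently.

    def consolidated_score(scores: dict[str, float],
                           reputer_w: dict[str, float],
                           reputee_w: dict[str, float]) -> float:
        # All three dicts are keyed by the agreed evaluation criteria.
        num = sum(scores[c] * reputer_w[c] * reputee_w[c] for c in scores)
        den = sum(reputer_w[c] * reputee_w[c] for c in scores)
        return num / den if den else 0.0

    print(consolidated_score({"quality": 0.9, "price": 0.6},
                             {"quality": 2.0, "price": 1.0},
                             {"quality": 1.0, "price": 0.5}))  # 0.84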

4.11 Adaptive Trust: Enterprise unified communications (UC)

1 Actors

1. Reputees: email/IM/VoIP/... UC clients

2. Reputers: Enterprise UC services.

3. Reputation Service Providers: Enterprise Policy Framework, through agents (gateways - XML, VoIP) or enterprise UC servers.

2 Description

Intrusion and SPAM detection agents monitor authorized behavior and score down clients (reputees) based on patterns of policy violations. They score clients back up to the default level when behavior is in line with policy. Reputers (UC services) deliver service based on the current trust-score level. The services themselves (policy enforcement points) may not be directly involved in interpreting scores; the reputee's access privileges may be modulated by Policy Decision Points.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

The enterprise policy against which to measure behavior: patterns of policy-violation or compliance.

4 Output

Trust-levels (authorizations) mapped to a numeric scale or role.
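A non-normative sketch of the score-down/score-up dynamic described above; the penalty and recovery rates, and the [0, 1] scale with default 1.0, are assumptions.

    DEFAULT_SCORE = 1.0

    def update_trust(score: float, violations: int, penalty: float = 0.2, recovery: float = 0.05) -> float:
        # Score down on policy violations; drift back toward the default level otherwise.
        if violations:
            return max(0.0, score - penalty * violations)
        return min(DEFAULT_SCORE, score + recovery)

    score = DEFAULT_SCORE
    for v in [0, 2, 0, 0]:  # one interval with two violations, then compliant behaviour
        score = update_trust(score, v)
    print(score)  # 0.7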

4.12 Federated Trust in UC

This is a more complex variant of the adaptive-trust UC use-case: a two-tier reputation system in which two RSPs are peered to exchange reputation-affecting events. The RSPs’ trust in each other may also be impacted.

1 Actors

1. Reputee: Remote UC client (e.g., Yahoo, Google client) - (C)

2. Reputer: called/emailed/IMed/... Enterprise-destination UC service - (D).

3. RSP: Enterprise Policy Framework, through agents (gateways - XML, VoIP) or enterprise UC servers.

2 Description

1. D detects a pattern of abuse (SPAM/SPIT/SPIM) and reports it to the peer server S (e.g., a DKIM or SIP server) hosting C.

2. S may gather similar inputs on C and be its RSP.

3. D may provide a trust-score and be (one of) C's RSPs.

4. S may combine scores/reports and act as a two-tier RSP. A possible hierarchical RSP scenario is hidden in (3).

5. This may result in RSP action by S similar to the generic Adaptive Trust UC use-case.

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

S scores the trust value of C; optionally, D scores it in step (3). S reacts to the score. D may act independently of S's scoring (it may rely on its own internal trust-score or input).

4 Output

The following are possible outputs, besides the reputation trust-score itself.

➢ Step [2] leads to a report/alert of a trust-related event.

➢ Steps [3] and [4] provide data or a trust-score. There is a contractual or baselined trust-level between every S & D (federation).
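A non-normative sketch of the peered exchange: a report structure for step [2] and a two-tier blending rule for step [4]. All field names and the 70/30 blend are assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AbuseReport:
        reporter: str                       # D, the enterprise-destination UC service
        subject: str                        # C, the remote UC client
        category: str                       # e.g. "SPAM", "SPIT" or "SPIM"
        score_hint: Optional[float] = None  # optional trust-score from D (step [3])

    def combine_two_tier(local_scores: list[float], peer_hints: list[float]) -> float:
        # S acting as a two-tier RSP (step [4]): blend its own inputs on C with
        # peer-provided hints, weighting its own observations more heavily (assumed 70/30).
        local = sum(local_scores) / len(local_scores) if local_scores else 0.5
        peer = sum(peer_hints) / len(peer_hints) if peer_hints else local
        return 0.7 * local + 0.3 * peer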

4.13 Peer-peer reputation (between actors)

1 Actors

Reputees: both participants in an electronic messaging exchange between people

Reputer: messaging client or server

2 Description

Two people communicate electronically (e.g. via email or IM).

1 Basic Flows

[TBD: figure]

1 Pre-conditions

2 Post-conditions

3 Input

Inputs to the reputation evaluation engine will be the communication content itself: text/content analysis of the message's basic intent (e.g. request, offer, commitment/promise, question, answer, notice), as well as the latency and frequency of interaction.

4 Output

Relative peer reputation and/or social capital each party has accumulated in the relationship.
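A non-normative sketch of accumulating such a relative score from analysed messages; the intent categories follow the Input list above, while the scoring increments and the 24-hour responsiveness bonus are assumptions.

    def social_capital(messages: list[dict]) -> float:
        # Each message dict carries an analysed 'intent' and, for replies, 'latency_hours'.
        score = 0.0
        for m in messages:
            if m["intent"] in {"answer", "commitment"}:
                score += 1.0  # fulfilled questions/promises build capital (assumption)
            if 0 < m.get("latency_hours", 0) < 24:
                score += 0.5  # responsive replies build capital (assumption)
        return score

    print(social_capital([{"intent": "question"},
                          {"intent": "answer", "latency_hours": 2}]))  # 1.5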

5 Security and Privacy considerations

As in any open system, the threats and vulnerabilities of the Reputation Management System must be analyzed, all the more so because the Reputation Management System is itself a service intended to be built on a web of trust.

5.1 Threat taxonomy

Specifically, [enisasec] catalogues a range of threats; many of these are captured here because they are potentially relevant to some or all of the use-cases discussed above.

1 Whitewashing Attack

The adversary resets a poor reputation by rejoining the system with a new identity. Systems that allow for easy change of identity and easy use of new pseudonyms are vulnerable to this attack.

2 Sybil Attack

The adversary creates multiple identities (sybils) and exploits them in order to manipulate a reputation score.

3 Impersonation and Reputation Theft

One entity acquires the identity of another entity (masquerades) and consequently steals its reputation.

4 Bootstrap issues and related threats

The initial reputation value given to a newcomer may lay it open to threats such as Sybil and Whitewashing attacks.

5 Extortion

Coordinated campaigns aimed at blackmail, damaging an entity's reputation for malicious motives.

6 Denial of Reputation

An attack designed to damage an entity’s reputation (e.g. in combination with a Sybil attack or impersonation) and create an opportunity for blackmail in order to have the reputation cleaned.

7 Ballot-stuffing and bad-mouthing

Reporting of a false reputation score; the attackers (distinct or sybils) collude to give positive/negative feedback, to increase or lower a reputation.

8 Collusion

Multiple users conspire (collude) to influence a given reputation.

9 Repudiation - of data, transaction

An entity can deny that a transaction happened, or deny the existence of data for which it was responsible.

10 Dishonest Reputer

The voter is not trustworthy in his/her scoring.

11 Privacy threats for voters and reputation owners

Reputers and reputation-system owners may be unwilling or unable to provide honest inputs for fear of reprisal or backlash from (an apparently powerful) reputee.

Anonymity offers a safe haven for honest scoring under these circumstances, and thereby improves the accuracy of votes.

12 Social threats

Discriminatory behavior is possible when, for example, in a second-order reputation system, an entity can choose to co-operate only with peers who have a high reputation, so that their recommendations weigh more heavily. Other possible social threats include the risk of herd behaviour, the penalisation of innovative, controversial opinions, and a vocal-minority effect.

13 Threats to the lower network layers

The reputation system can be attacked by targeting the underlying infrastructure; for example, the reputation information can be manipulated/replayed/disclosed both when stored and when transported, or may be made unavailable by a denial of service attack.

14 Trust topology threats

An attack targets certain links to have maximum effect, for example those entities with the highest reputation.

15 Threats to ratings

There is a whole range of threats to reputation ratings which exploit features of the metrics used by the system to calculate the aggregate reputation rating from individual scores.

5.2 Countermeasures

5.3 Privacy considerations

1 Privacy of Reputee

Reputation data – input and score – SHOULD NOT include information (meta-data) that constitutes or relates to Personal Information (PI) or Personally Identifiable Information (PII).
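As a non-normative illustration, a reputation record could be scrubbed of such meta-data before export; the field names below are hypothetical.

    PII_FIELDS = {"name", "email", "phone", "address"}  # illustrative PI/PII keys

    def scrub(record: dict) -> dict:
        # Drop PI/PII meta-data from a reputation record before it leaves the system.
        return {k: v for k, v in record.items() if k not in PII_FIELDS}

    print(scrub({"entity": "seller-42", "score": 0.97, "email": "x@example.com"}))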

2 Privacy of Reputer

The Portable Reputation Format should provide for and preserve the anonymity, where desired or required, of the reputation provider from the reputation consumer and the reputee. The implication is that while the reputation calculator needs authentic information about the identity of the reputation input provider, and audit and compliance requirements still require the identity of the input source to be recorded, that identity should not be exposed to the reputation consumer or the reputee.
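One non-normative way to reconcile reputer anonymity with auditability is a keyed pseudonym: the reputation consumer sees only the pseudonym, while the party holding the audit key can re-identify the input source. This is a sketch of the idea, not a mandated scheme.

    import hmac, hashlib

    def pseudonymize_reputer(reputer_id: str, audit_key: bytes) -> str:
        # HMAC-SHA256 of the reputer's identity under a key held by the audit function.
        return hmac.new(audit_key, reputer_id.encode(), hashlib.sha256).hexdigest()

    print(pseudonymize_reputer("reputer-17", audit_key=b"secret-audit-key")[:16])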

3 Privacy protection between Reputers

The potential for reputers being influenced, in specific instances, by other reputers is also detrimental to the integrity and accuracy of the reputation input; reputers' inputs should therefore be kept private from one another where feasible.

A. Acknowledgements

The following individuals have participated in the creation of this specification and are gratefully acknowledged:

Participants:

B. Non-Normative Text

C. Revision History

|Revision |Date |Editor |Changes Made |

|0.1 |17 September 2008 |Mahalingam Mani |Initial version |

|0.2 |30 September 2008 |Mahalingam Mani |Updates to use-case sections, introduction to reference model based on initial TC discussions. Also introducing security and privacy considerations section. |

|0.21 |29 October 2008 |Mahalingam Mani |Expanded on the Reference model, security considerations. Refined use-cases text. |

-----------------------

[1] An example is MUD: Multi-User Domain.

[2] This definition, from the International Engineering Consortium, is the most generic of many minor industry variants.
