Research Project
ENHANCING TRUSTABILITY OF MMOGS ENVIRONMENTS
Publications
Enhancing trustability in MMOGs environments
Publication . Cardoso, Rui João Morais de Almeida Costa; Gomes, Abel João Padrão; Freire, Mário Marques
Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds
(VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand more
autonomous security and trust mechanisms, akin to those humans use in real life.
This is a difficult matter because trust in humans and organizations depends on the
perception and experience of each individual, which is difficult to quantify or
measure. In fact, these societal environments lack trust mechanisms similar to those
involved in human-to-human interactions. Besides, interactions mediated by computing
devices are constantly evolving, requiring trust mechanisms that keep pace with these
developments and assess risk situations.
In VWs/MMOGs, it is widely recognized that users develop trust relationships from their
in-world interactions with others. However, these trust relationships end up not being
represented in the data structures (or databases) of such virtual worlds, though they
sometimes appear associated with reputation and recommendation systems. In addition,
as far as we know, the user is not provided with a personal trust tool to sustain his/her
decision-making while he/she interacts with other users in the virtual or game world.
To solve this problem, as well as those mentioned above, we propose herein a
formal representation of these personal trust relationships, which are based on
avatar-avatar interactions. The leading idea is to provide each avatar-impersonated
player with a personal trust tool that follows a distributed trust model, i.e., the
trust data are distributed over the societal network of a given VW/MMOG.
Representing, manipulating, and inferring trust from the user/player point of view is
certainly a grand challenge. When someone meets an unknown individual, the question
is “Can I trust him/her or not?”. Clearly, this requires the user to have access to
a representation of trust about others but, unless we are using an open-source VW/MMOG,
it is difficult, not to say unfeasible, to get access to such data. Even in an
open-source system, a number of users may refuse to pass on information about their
friends, acquaintances, or others. Putting together their own data and data gathered
from others, the avatar-impersonated player should be able to arrive at a trust result
about the current trustee. As the trust assessment method used in this thesis, we use
subjective logic operators and graph search algorithms to undertake such trust inference
about the trustee. The proposed trust inference system has been validated using
a number of OpenSimulator (opensimulator.org) scenarios, which showed an increased
accuracy in evaluating the trustability of avatars.
Summing up, our proposal thus aims to introduce a trust theory for virtual worlds, with
its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g.,
graph search methods), on an individual basis, rather than relying on the usual
centralized reputation systems. In particular, and unlike other trust discovery methods,
our methods run at interactive rates.
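The trust discovery step mentioned above can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes the trust network is a plain adjacency dictionary and enumerates bounded simple paths from trustor to trustee by depth-first search; the function name, data shape, and length bound are hypothetical.

```python
def trust_paths(graph, source, target, max_len=4):
    """Enumerate simple paths from trustor to trustee via iterative DFS.

    graph: dict mapping each user to the list of users they hold opinions on.
    Paths longer than max_len hops are pruned, since trust along long
    referral chains is heavily discounted anyway.
    """
    paths, stack = [], [(source, [source])]
    while stack:
        node, path = stack.pop()
        if node == target:
            paths.append(path)
            continue
        if len(path) > max_len:
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # keep paths simple (no revisits)
                stack.append((nxt, path + [nxt]))
    return paths

# Two independent referral paths from A to D:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(sorted(trust_paths(graph, "A", "D")))
# → [['A', 'B', 'D'], ['A', 'C', 'D']]
```

Each discovered path can then be collapsed with subjective logic operators (discounting along a path, fusing across paths) to yield a single trust decision.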
A User Trust System for Online Games: Part I
Publication . Cardoso, Rui Costa; Freire, Mario; Gomes, Abel
In virtual worlds (including computer games), users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to sustain his/her decision-making while he/she interacts with other users in the virtual or game world. In order to come up with a computational formal representation of these personal trust relationships, we need to succeed in converting in-world interactions into reliable sources of trust-related data. In this paper, we develop the required formalisms to gather and represent in-world interactions (which are based on activity theory), as well as a method to convert in-world interactions into trust networks. In the companion paper, we use these trust networks to produce a computational trust decision based on subjective logic. This solution aims at supporting in-world user (or avatar) decisions about others in the game world.
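A minimal sketch of the conversion step this abstract describes, under assumed data shapes (the paper's own activity-theory-based formalisms are richer): each interaction record is a hypothetical (trustor, trustee, outcome) triple, and the aggregated positive/negative evidence on each edge is mapped to a subjective logic (belief, disbelief, uncertainty) triple using the standard evidence mapping with non-informative prior weight W = 2.

```python
from collections import defaultdict

def build_trust_network(interactions):
    """Aggregate interaction records into a trust network.

    interactions: iterable of (trustor, trustee, positive) triples, where
    positive is True for a satisfactory interaction and False otherwise.
    Each directed edge gets a (b, d, u) triple via the evidence mapping
        b = r / (r + s + W),  d = s / (r + s + W),  u = W / (r + s + W)
    with W = 2, so b + d + u = 1 and fresh edges start maximally uncertain.
    """
    evidence = defaultdict(lambda: [0, 0])  # (trustor, trustee) -> [r, s]
    for trustor, trustee, positive in interactions:
        evidence[(trustor, trustee)][0 if positive else 1] += 1
    network = {}
    for edge, (r, s) in evidence.items():
        total = r + s + 2.0
        network[edge] = (r / total, s / total, 2.0 / total)
    return network

# Three logged interactions, two positive and one negative:
net = build_trust_network([
    ("alice", "bob", True),
    ("alice", "bob", True),
    ("alice", "bob", False),
])
print(net[("alice", "bob")])  # → (0.4, 0.2, 0.4)
```

The resulting dictionary of edges is the trust-network representation that the companion paper's inference mechanism consumes.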
A User Trust System for Online Games: Part II
Publication . Cardoso, Rui Costa; Freire, Mario; Gomes, Abel
Representing, manipulating, and inferring trust from the user point of view is certainly a grand challenge in virtual worlds, including online games. When someone meets an unknown individual, the question is “Can I trust him/her or not?” This requires the user to have access to a representation of trust about others, as well as a set of operators to undertake inference about the trustability of other users/players. In this paper, we employ a trust representation generated from in-world data in order to feed individual trust decisions. To achieve that purpose, we assume that such a representation of trust already exists; in fact, it was proposed in another paper of ours. Thus, the focus here is on the trust mechanisms required to infer the trustability of other users/players. More specifically, we use an individual trust representation, deployed as a trust network, as the basis for an inference mechanism that employs two subjective logic operators (consensus and discount) to automatically derive trust decisions. The proposed trust inference system has been validated through OpenSimulator scenarios, which led to a 5% increase in the trustability of avatars in relation to the reference scenario (without trust).
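The two operators named above, discount (trust transitivity along a referral path) and consensus (cumulative fusion of independent opinions about the same target), can be sketched as follows. This follows Jøsang's standard subjective logic definitions rather than the paper's exact formulation; the class and function names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A subjective logic opinion: belief, disbelief, uncertainty, base rate.

    Invariant: b + d + u = 1. The probability expectation is E = b + a*u.
    """
    b: float
    d: float
    u: float
    a: float = 0.5

    def expectation(self) -> float:
        return self.b + self.a * self.u

def discount(w_ab: Opinion, w_bc: Opinion) -> Opinion:
    """A's derived opinion of C through advisor B (trust transitivity)."""
    return Opinion(
        b=w_ab.b * w_bc.b,
        d=w_ab.b * w_bc.d,
        u=w_ab.d + w_ab.u + w_ab.b * w_bc.u,
        a=w_bc.a,
    )

def consensus(w1: Opinion, w2: Opinion) -> Opinion:
    """Cumulative fusion of two independent opinions about the same target."""
    k = w1.u + w2.u - w1.u * w2.u
    if k == 0:  # both opinions dogmatic (u = 0): fall back to averaging
        return Opinion((w1.b + w2.b) / 2, (w1.d + w2.d) / 2, 0.0, w1.a)
    return Opinion(
        b=(w1.b * w2.u + w2.b * w1.u) / k,
        d=(w1.d * w2.u + w2.d * w1.u) / k,
        u=(w1.u * w2.u) / k,
        a=w1.a,
    )

# A trusts advisor B, and B holds an opinion on target D:
w_ab = Opinion(0.8, 0.1, 0.1)
w_bd = Opinion(0.7, 0.2, 0.1)
path_opinion = discount(w_ab, w_bd)       # A's indirect opinion of D
# Fuse with a second independent path's opinion of D:
final = consensus(path_opinion, Opinion(0.5, 0.3, 0.2))
```

Note that discounting can only increase uncertainty (long referral chains converge toward total uncertainty), while consensus reduces it as independent evidence accumulates; this asymmetry is what makes the fused result usable as a trust decision.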
Funders
Funding agency
Fundação para a Ciência e a Tecnologia
Funding Award Number
SFRH/BD/79567/2011