Neuromarketing Applications of Neuroprosthetic Devices: An Assessment of Neural Implants’ Capacities for Gathering Data and Influencing Behavior

In Business Models for Strategic Innovation: Cross-Functional Perspectives, edited by S.M. Riad Shams, Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 11-24 • London: Routledge, 2018

ABSTRACT: Neuromarketing utilizes innovative technologies to accomplish two key tasks: 1) gathering data about the ways in which human beings’ cognitive processes can be influenced by particular stimuli; and 2) creating and delivering stimuli to influence the behavior of potential consumers. In this text, we argue that rather than utilizing specialized systems such as EEG and fMRI equipment (for data gathering) and web-based microtargeting platforms (for influencing behavior), it will increasingly be possible for neuromarketing practitioners to perform both tasks by accessing and exploiting neuroprosthetic devices already possessed by members of society.

We first present an overview of neuromarketing and neuroprosthetic devices. A two-dimensional conceptual framework is then developed that can be used to identify the technological and biocybernetic capacities of different types of neuroprosthetic devices for performing neuromarketing-related functions. One axis of the framework delineates the main functional types of sensory, motor, and cognitive neural implants; the other describes the key neuromarketing activities of gathering data on consumers’ cognitive activity and influencing their behavior. This framework is then utilized to identify potential neuromarketing applications for a diverse range of existing and anticipated neuroprosthetic technologies.
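The two-dimensional framework described above can be pictured as a simple grid of implant types against neuromarketing activities. A minimal sketch in Python follows; the capacity labels and the sample assessment are invented placeholders, not findings from the chapter:

```python
# Hypothetical sketch of the chapter's two-axis framework: rows are the
# functional types of neural implants, columns the two neuromarketing
# activities, and each cell holds an assessed capacity level.

IMPLANT_TYPES = ["sensory", "motor", "cognitive"]
ACTIVITIES = ["gathering data", "influencing behavior"]

# Cells start unassessed (None) and are filled in as devices are analyzed.
framework = {t: {a: None for a in ACTIVITIES} for t in IMPLANT_TYPES}

def assess(implant_type: str, activity: str, capacity: str) -> None:
    """Record an assessed capacity (e.g. 'low', 'moderate', 'high') in the grid."""
    framework[implant_type][activity] = capacity

# Illustrative placeholder assessment, not the chapter's conclusion:
assess("sensory", "influencing behavior", "moderate")
```

Representing the framework as a nested mapping keeps each (type, activity) cell independently addressable, which mirrors how the chapter maps individual device technologies onto the grid.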

It is hoped that this analysis of the capacities of neuroprosthetic devices to be utilized in neuromarketing-related roles can: 1) lay a foundation for subsequent analyses of whether such potential applications are desirable or inappropriate from ethical, legal, and operational perspectives; and 2) help information security professionals develop effective mechanisms for protecting neuroprosthetic devices against inappropriate or undesired neuromarketing techniques while safeguarding legitimate neuromarketing activities.

Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization

ISBN 978-1-944373-21-4 • Second edition • Defragmenter Media, 2018 • 238 pages

Key organizational decisions made by sapient AIs. The pressure to undergo neuroprosthetic augmentation in order to compete with genetically enhanced coworkers. A corporate headquarters that exists only in cyberspace as a persistent virtual world. A project team whose members interact socially as online avatars without knowing or caring whether fellow team members are human beings or robots. Futurologists’ visions of the dawning age of ‘posthumanized’ organizations range from the disquieting to the exhilarating. Which of these visions are compatible with our best current understanding of the capacities and the limits of human intelligence, physiology, and sociality? And what can posthumanist thought reveal about the forces of technologization that are transforming how we collaborate with one another – and with ever more sophisticated artificial agents and systems – to achieve shared goals?

This book develops new insights into the evolving nature of intelligent agency and collaboration by applying the post-anthropocentric and post-dualistic methodologies of posthumanism to the fields of organizational theory and management. Building on a comprehensive typology of posthumanism, an emerging ‘organizational posthumanism’ is described which makes sense of the dynamics of technological posthumanization that are reshaping the members, personnel structures, information systems, processes, physical and virtual spaces, and external environments available to organizations. Conceptual frameworks and analytical tools are formulated for use in diagnosing and guiding the ongoing convergence in the capacities of human and artificial actors that is being spurred by novel technologies relating to human augmentation, synthetic agency, and digital-physical ecosystems. As the first systematic investigation of these topics, this text will be of interest to scholars and students of posthumanism and management and to management practitioners who must grapple on a daily basis with the forces of technologization that are increasingly powerful drivers of organizational change.

A Phenomenological Analysis of the Virtual World as Aesthetic Object: Echo, Deepening, or Dissolution of the Lifeworld?

17th Annual Conference of the Polish Phenomenological Association: History, Body, and Life-World – On Patočka and Beyond • Institute of Philosophy and Sociology, Polish Academy of Sciences, Warsaw • December 15, 2017

ABSTRACT: In this work we build on the ontological and aesthetic frameworks formulated by Roman Ingarden to develop a phenomenological analysis of the virtual world as aesthetic object. First, ‘virtual reality technology’ is distinguished from ‘virtual environments’ and ‘virtual worlds.’ The types of immersive, interactive virtual worlds accessed through contemporary VR technologies are further distinguished from the types of ‘virtual worlds’ accessed, e.g., by reading a novel or watching a film. Essential and optional elements of virtual worlds are identified, with special attention given to the (software-enforced) ‘laws of nature’ governing the structure and dynamics of elements in a world, the pseudo-natural origins of apparently ‘natural’ elements like wild animals and geographic formations, and the unique positions of the world’s designer(s) and human visitor(s). The potential ‘incompleteness’ of virtual architectural structures and the inability to determine whether one’s social interactions are with human or artificial agents are analyzed in light of Ingarden’s interpretation of Husserl’s phenomenological model of intentionality and the perception of objects. It is shown that a virtual building, e.g., does not display all the features of a real-world building but instead possesses some characteristics found in real-world paintings.

Drawing on Ingarden’s framework, the (physical) ontic basis of a virtual world is distinguished from the (purely intentional) virtual world as a work of art that is grasped through perception and the related aesthetic and cultural objects that may be constituted by a visitor who undergoes the right sort of conscious experience. The stratification of a virtual world as a work of art is also investigated. Building on Ingarden’s critique of Husserl’s concept of the ‘lifeworld’ as the natural world that is simultaneously (a) stripped of modern scientific theory and (b) the world that we live in and manipulate, it is suggested that VR-facilitated virtual worlds (like other highly technologized forms of art) undermine the factual possibility for such a lifeworld to exist. In response, though, Patočka’s notion (influenced by Ingarden) of fictional literary worlds as ‘echoes’ of the lifeworld is noted; we thus close by raising the question of whether certain virtual worlds might potentially be employed to help restore the possibility of (perhaps temporarily) establishing a Husserlian lifeworld.

An Axiology of Information Security for Futuristic Neuroprostheses: Upholding Human Values in the Context of Technological Posthumanization

Frontiers in Neuroscience 11, 605 (2017); MNiSW 2016 List A: 30 points; 2017 Impact Factor: 3.566

ABSTRACT: Previous works exploring the challenges of ensuring information security for neuroprosthetic devices and their users have typically built on the traditional InfoSec concept of the “CIA Triad” of confidentiality, integrity, and availability. However, we argue that the CIA Triad provides an increasingly inadequate foundation for envisioning information security for neuroprostheses, insofar as it presumes that (1) any computational systems to be secured are merely instruments for expressing their human users’ agency, and (2) computing devices are conceptually and practically separable from their users. Drawing on contemporary philosophy of technology and philosophical and critical posthumanist analysis, we contend that futuristic neuroprostheses could conceivably violate these basic InfoSec presumptions, insofar as (1) they may alter or supplant their users’ biological agency rather than simply supporting it, and (2) they may structurally and functionally fuse with their users to create qualitatively novel “posthumanized” human-machine systems that cannot be secured as though they were conventional computing devices. Simultaneously, it is noted that many of the goals that have been proposed for future neuroprostheses by InfoSec researchers (e.g., relating to aesthetics, human dignity, authenticity, free will, and cultural sensitivity) fall outside the scope of InfoSec as it has historically been understood and touch on a wide range of ethical, aesthetic, physical, metaphysical, psychological, economic, and social values. We suggest that the field of axiology can provide useful frameworks for more effectively identifying, analyzing, and prioritizing such diverse types of values and goods that can (and should) be pursued through InfoSec practices for futuristic neuroprostheses.

The Diffuse Intelligent Other: An Ontology of Nonlocalizable Robots as Moral and Legal Actors

In Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov, pp. 177-98 • Farnham: Ashgate, 2016

ABSTRACT: Much thought has been given to the question of who bears moral and legal responsibility for actions performed by robots. Some argue that responsibility could be attributed to a robot if it possessed human-like autonomy and metavolitionality, and that while such capacities can potentially be possessed by a robot with a single spatially compact body, they cannot be possessed by a spatially disjunct, decentralized collective such as a robotic swarm or network. However, advances in ubiquitous robotics and distributed computing open the door to a new form of robotic entity that possesses a unitary intelligence, despite the fact that its cognitive processes are not confined within a single spatially compact, persistent, identifiable body. Such a “nonlocalizable” robot may possess a body whose myriad components interact with one another at a distance and which is continuously transforming as components join and leave the body. Here we develop an ontology for classifying such robots on the basis of their autonomy, volitionality, and localizability. Using this ontology, we explore the extent to which nonlocalizable robots—including those possessing cognitive abilities that match or exceed those of human beings—can be considered moral and legal actors that are responsible for their own actions.

Managing the Ethical Dimensions of Brain-Computer Interfaces in eHealth: An SDLC-based Approach

In 9th Annual EMAB Conference: Innovation, Entrepreneurship and Digital Ecosystems (EUROMED 2016) Book of Proceedings, edited by Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 876-90 • Engomi: EuroMed Press, 2016

ABSTRACT: A growing range of brain-computer interface (BCI) technologies is being employed for purposes of therapy and human augmentation. While much thought has been given to the ethical implications of such technologies at the ‘macro’ level of social policy and ‘micro’ level of individual users, little attention has been given to the unique ethical issues that arise during the process of incorporating BCIs into eHealth ecosystems. In this text a conceptual framework is developed that enables the operators of eHealth ecosystems to manage the ethical components of such processes in a more comprehensive and systematic way than has previously been possible. The framework’s first axis defines five ethical dimensions that must be successfully addressed by eHealth ecosystems: 1) beneficence; 2) consent; 3) privacy; 4) equity; and 5) liability. The second axis describes five stages of the systems development life cycle (SDLC) process whereby new technology is incorporated into an eHealth ecosystem: 1) analysis and planning; 2) design, development, and acquisition; 3) integration and activation; 4) operation and maintenance; and 5) disposal. Known ethical issues relating to the deployment of BCIs are mapped onto this matrix in order to demonstrate how it can be employed by the managers of eHealth ecosystems as a tool for fulfilling ethical requirements established by regulatory standards or stakeholders’ expectations. Beyond its immediate application in the case of BCIs, we suggest that this framework may also be utilized beneficially when incorporating other innovative forms of information and communications technology (ICT) into eHealth ecosystems.
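The 5 × 5 framework described above can be modeled as a lookup table keyed by (dimension, stage) pairs. The sketch below uses the dimension and stage names from the abstract; the sample issue recorded at the end is an invented placeholder, not an item from the chapter:

```python
# Sketch of the chapter's two-axis framework as a matrix of cells, each
# collecting the known ethical issues for one (dimension, stage) pair.

ETHICAL_DIMENSIONS = ["beneficence", "consent", "privacy", "equity", "liability"]
SDLC_STAGES = [
    "analysis and planning",
    "design, development, and acquisition",
    "integration and activation",
    "operation and maintenance",
    "disposal",
]

# All 25 cells start empty and are populated as issues are mapped on.
matrix = {(d, s): [] for d in ETHICAL_DIMENSIONS for s in SDLC_STAGES}

def map_issue(dimension: str, stage: str, issue: str) -> None:
    """Record a known ethical issue in the appropriate cell of the matrix."""
    matrix[(dimension, stage)].append(issue)

# Illustrative placeholder entry (not drawn from the chapter):
map_issue("privacy", "disposal",
          "residual neural data on decommissioned BCI hardware")
```

An eHealth ecosystem operator could then audit coverage simply by checking which cells remain empty, which is the sense in which the framework functions as a tool for fulfilling ethical requirements systematically.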

Information Security Concerns as a Catalyst for the Development of Implantable Cognitive Neuroprostheses

In 9th Annual EMAB Conference: Innovation, Entrepreneurship and Digital Ecosystems (EUROMED 2016) Book of Proceedings, edited by Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 891-904 • Engomi: EuroMed Press, 2016

ABSTRACT: Standards like the ISO 27000 series, IEC/TR 80001, NIST SP 1800, and FDA guidance on medical device cybersecurity define the responsibilities that manufacturers and operators bear for ensuring the information security of implantable medical devices. In the case of implantable cognitive neuroprostheses (ICNs) that are integrated with the neural circuitry of their human hosts, there is a widespread presumption that InfoSec concerns serve only as limiting factors that can complicate, impede, or preclude the development and deployment of such devices. However, we argue that when appropriately conceptualized, InfoSec concerns may also serve as drivers that can spur the creation and adoption of such technologies. A framework is formulated that describes seven types of actors whose participation is required in order for ICNs to be adopted; namely, their 1) producers, 2) regulators, 3) funders, 4) installers, 5) human hosts, 6) operators, and 7) maintainers. By mapping onto this framework InfoSec issues raised in industry standards and other literature, it is shown that for each actor in the process, concerns about information security can either disincentivize or incentivize the actor to advance the development and deployment of ICNs for purposes of therapy or human enhancement. For example, it is shown that ICNs can strengthen the integrity, availability, and utility of information stored in the memories of persons suffering from certain neurological conditions and may enhance information security for society as a whole by providing new tools for military, law enforcement, medical, or corporate personnel who provide critical InfoSec services.
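The seven-actor framework lends itself to a simple tabulation in which each actor accumulates InfoSec concerns tagged as incentives or disincentives for ICN adoption. The sketch below is a minimal illustration; the field names are assumptions, and the sample concern paraphrases the example given at the end of the abstract:

```python
# Minimal sketch of the paper's actor framework: each of the seven actor
# types holds two lists of InfoSec concerns, one that disincentivizes and
# one that incentivizes advancing ICN development and deployment.

ACTORS = ["producers", "regulators", "funders", "installers",
          "human hosts", "operators", "maintainers"]

concerns = {actor: {"incentives": [], "disincentives": []} for actor in ACTORS}

def register_concern(actor: str, concern: str, incentivizes: bool) -> None:
    """File a concern under the actor it affects, tagged by its effect."""
    key = "incentives" if incentivizes else "disincentives"
    concerns[actor][key].append(concern)

# Paraphrase of the abstract's own example:
register_concern("human hosts",
                 "ICN strengthens integrity and availability of stored memories",
                 incentivizes=True)
```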

From Stand Alone Complexes to Memetic Warfare: Cultural Cybernetics and the Engineering of Posthuman Popular Culture

50 Shades of Popular Culture International Conference • Facta Ficta Research Centre, Kraków • February 19, 2016

ABSTRACT: Here we argue that five emerging social and technological trends are creating new possibilities for the instrumentalization (or even “weaponization”) of popular culture for commercial, ideological, political, or military ends and for the development of a posthuman popular culture that is no longer solely produced by or for “humanity” as presently understood. These five trends are: 1) the decentralization of the sources of popular culture, as reflected in the ability of ordinary users to create and upload content that “goes viral” within popular culture, as well as the use of “astroturfing” and paid “troll armies” by corporate or state actors to create the appearance of broad-based grassroots support for particular products, services, actions, or ideologies; 2) the centralization of the mechanisms for accessing popular culture, as seen in the role of instruments like Google’s search engine, YouTube, Facebook, Instagram, and Wikipedia in concentrating the distribution channels for cultural products, as well as efforts by state actors to censor social media content perceived as threatening or disruptive; 3) the personalization of popular culture, as manifested in the growth of cultural products like computer games that dynamically reconfigure themselves in response to a player’s behavior, thereby creating a different product for each individual that is adapted to a user’s unique experiences, desires, and psychological characteristics; 4) the automatization of the creation of products of popular culture, as seen in the automated high-speed generation of webpages, artwork, music, memes, and computer game content by AI systems that could potentially allow venues of popular culture (such as the Internet) to be flooded with content designed to influence a social group in particular ways; and 5) the virtualization of the technological systems and mechanisms for creating, transmitting, and experiencing the products of popular culture, as witnessed in the development of all-purpose nodes (such as smartphones) that are capable of handling a full range of cultural products in the form of still images, video, audio, text, and interactive experiences, and in the growing digitalization of cultural products that allows them to be more easily manipulated and injected into the popular culture of other states or social groups, bypassing physical and political barriers.

While these trends are expected to yield a broad range of positive and negative impacts, we focus on a particular subset of these impacts. Namely, we argue that the convergence of these five trends opens the door for the creation of popular culture that: 1) does not exist in any permanent, tangible physical artifacts but only as a collection of continuously transforming digital data that is stored on the servers of a few powerful corporate or state actors and is subject to manipulation or degradation as a result of computer viruses, hacking, power outages, or other factors; 2) can be purposefully and effectively engineered using techniques commonly employed within IT management, electronics engineering, marketing, and other disciplines; and 3) can become a new kind of weapon and battleground in struggles for military, political, ideological, and commercial superiority on the part of corporate, state, and other actors.

In order to stimulate thinking about ways in which these trends might develop, we conclude by considering two fictional near-future worlds – those depicted in Ghost in the Shell: Stand Alone Complex and Transhuman Space: Toxic Memes – in which the further evolution of these five trends is shown as leading to the neurocybernetically facilitated manipulation of popular culture, “memetic warfare,” and related phenomena. We suggest that these fictional works represent examples of self-reflexive futurology: i.e., elements of contemporary popular culture that attempt to anticipate and explore the ways in which future popular culture could be purposefully engineered, instrumentalized, and even weaponized in the service of a diverse array of ends.

Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money that Advances Human Ethical Values

Annales. Etyka w Życiu Gospodarczym / Annales: Ethics in Economic Life 18, no. 4 (2015), pp. 85-98; MNiSW 2015 List B: 10 points

ABSTRACT: Cryptocurrencies like Bitcoin are offering new avenues for economic empowerment to individuals around the world. However, they also provide a powerful tool that facilitates criminal activities such as human trafficking and illegal weapons sales that cause great harm to individuals and communities. Cryptocurrency advocates have argued that the ethical dimensions of cryptocurrency are not qualitatively new, insofar as money has always been understood as a passive instrument that lacks ethical values and can be used for good or ill purposes. In this paper, we challenge such a presumption that money must be “value-neutral.” Building on advances in artificial intelligence, cryptography, and machine ethics, we argue that it is possible to design artificially intelligent cryptocurrencies that are not ethically neutral but which autonomously regulate their own use in a way that reflects the ethical values of particular human beings – or even entire human societies. We propose a technological framework for such cryptocurrencies and then analyze the legal, ethical, and economic implications of their use. Finally, we suggest that the development of cryptocurrencies possessing ethical as well as monetary value can provide human beings with a new economic means of positively influencing the ethos and values of their societies.
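The core mechanism proposed above can be caricatured as a settlement function gated by a pluggable ethics policy. The sketch below is a toy illustration only; all names, the declared-purpose field, and the blocklist rule are assumptions for exposition, not the authors' actual design (which would require autonomous, machine-ethics-based evaluation rather than a static blocklist):

```python
# Toy sketch: a currency whose transactions are vetted by an embedded,
# configurable ethics policy before they are allowed to settle.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    sender: str
    recipient: str
    amount: float
    purpose: str  # declared purpose; a real system would need to verify this

# An ethics policy maps a transaction to an accept/reject decision.
EthicsPolicy = Callable[[Transaction], bool]

def blocklist_policy(tx: Transaction) -> bool:
    """Reject transactions whose declared purpose appears on a blocklist."""
    blocked = {"weapons", "trafficking"}
    return tx.purpose not in blocked

def settle(tx: Transaction, policy: EthicsPolicy) -> bool:
    """The currency self-regulates: settlement occurs only if the policy approves."""
    return policy(tx)
```

Because the policy is a first-class parameter, different individuals (or societies) could in principle embed different value systems into otherwise identical currency mechanics, which is the sense in which the paper describes money that is not value-neutral.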

Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman Neuropolity

Creatio Fantastica no. 3(50) (2015)

ABSTRACT: While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings that constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games.

The Social Robot as ‘Charismatic Leader’: A Phenomenology of Human Submission to Nonhuman Power

In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, edited by Johanna Seibt, Raul Hakli, and Marco Nørskov, pp. 329-39 • Frontiers in Artificial Intelligence and Applications 273 • IOS Press, 2014

ABSTRACT: Much has been written about the possibility of human trust in robots. In this article we consider a more specific relationship: that of a human follower’s obedience to a social robot who leads through the exercise of referent power and what Weber described as ‘charismatic authority.’ By studying robotic design efforts and literary depictions of robots, we suggest that human beings are striving to create charismatic robot leaders that will either (1) inspire us through their display of superior morality; (2) enthrall us through their possession of superhuman knowledge; or (3) seduce us with their romantic allure. Rejecting a contractarian-individualist approach which presumes that human beings will be able to consciously ‘choose’ particular robot leaders, we build on the phenomenological-social approach to trust in robots to argue that charismatic robot leaders will emerge naturally from our world’s social fabric, without any rational decision on our part. Finally, we argue that the stability of these leader-follower relations will hinge on a fundamental, unresolved question of robotic intelligence: is it possible for synthetic intelligences to exist that are morally, intellectually, and emotionally sophisticated enough to exercise charismatic authority over human beings—but not so sophisticated that they lose the desire to do so?
