A Phenomenological “Aesthetics of Isolation” as Environmental Aesthetics for an Era of Ubiquitous Art

The Polish Journal of Aesthetics 49 (2018), pp. 11-25; MNiSW 2016 List B: 12 points

ABSTRACT: Here the concept of the human being as a “relatively isolated system” developed in Ingarden’s later phenomenology is adapted into an “aesthetics of isolation” that complements conventional environmental aesthetics. Such an aesthetics of isolation is especially relevant, given the growing “aesthetic overload” brought about by ubiquitous computing and new forms of art and aesthetic experience such as those involving virtual reality, interactive online performance art, and artificial creativity.

Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization

ISBN 978-1-944373-21-4 • Second edition • Defragmenter Media, 2018 • 238 pages

Key organizational decisions made by sapient AIs. The pressure to undergo neuroprosthetic augmentation in order to compete with genetically enhanced coworkers. A corporate headquarters that exists only in cyberspace as a persistent virtual world. A project team whose members interact socially as online avatars without knowing or caring whether fellow team members are human beings or robots. Futurologists’ visions of the dawning age of ‘posthumanized’ organizations range from the disquieting to the exhilarating. Which of these visions are compatible with our best current understanding of the capacities and the limits of human intelligence, physiology, and sociality? And what can posthumanist thought reveal about the forces of technologization that are transforming how we collaborate with one another – and with ever more sophisticated artificial agents and systems – to achieve shared goals?

This book develops new insights into the evolving nature of intelligent agency and collaboration by applying the post-anthropocentric and post-dualistic methodologies of posthumanism to the fields of organizational theory and management. Building on a comprehensive typology of posthumanism, an emerging ‘organizational posthumanism’ is described which makes sense of the dynamics of technological posthumanization that are reshaping the members, personnel structures, information systems, processes, physical and virtual spaces, and external environments available to organizations. Conceptual frameworks and analytical tools are formulated for use in diagnosing and guiding the ongoing convergence in the capacities of human and artificial actors that is being spurred by novel technologies relating to human augmentation, synthetic agency, and digital-physical ecosystems. As the first systematic investigation of these topics, this text will be of interest to scholars and students of posthumanism and management and to management practitioners who must grapple on a daily basis with the forces of technologization that are increasingly powerful drivers of organizational change.

A Phenomenological Analysis of the Virtual World as Aesthetic Object: Echo, Deepening, or Dissolution of the Lifeworld?

17th Annual Conference of the Polish Phenomenological Association: History, Body, and Life-World – On Patočka and Beyond • Institute of Philosophy and Sociology, Polish Academy of Sciences, Warsaw • December 15, 2017

ABSTRACT: In this work we build on the ontological and aesthetic frameworks formulated by Roman Ingarden to develop a phenomenological analysis of the virtual world as aesthetic object. First, ‘virtual reality technology’ is distinguished from ‘virtual environments’ and ‘virtual worlds.’ The types of immersive, interactive virtual worlds accessed through contemporary VR technologies are further distinguished from the types of ‘virtual worlds’ accessed, e.g., by reading a novel or watching a film. Essential and optional elements of virtual worlds are identified, with special attention given to the (software-enforced) ‘laws of nature’ governing the structure and dynamics of elements in a world, the pseudo-natural origins of apparently ‘natural’ elements like wild animals and geographic formations, and the unique positions of the world’s designer(s) and human visitor(s). The potential ‘incompleteness’ of virtual architectural structures and the inability to determine whether one’s social interactions are with human or artificial agents are analyzed in light of Ingarden’s interpretation of Husserl’s phenomenological model of intentionality and the perception of objects. It is shown that a virtual building, e.g., does not display all the features of a real-world building but instead possesses some characteristics found in real-world paintings.

Drawing on Ingarden’s framework, the (physical) ontic basis of a virtual world is distinguished from the (purely intentional) virtual world as a work of art that is grasped through perception and the related aesthetic and cultural objects that may be constituted by a visitor who undergoes the right sort of conscious experience. The stratification of a virtual world as a work of art is also investigated. Building on Ingarden’s critique of Husserl’s concept of the ‘lifeworld’ as the natural world that is simultaneously (a) stripped of modern scientific theory and (b) the world that we live in and manipulate, it is suggested that VR-facilitated virtual worlds (like other highly technologized forms of art) undermine the factual possibility for such a lifeworld to exist. In response, though, Patočka’s notion (influenced by Ingarden) of fictional literary worlds as ‘echoes’ of the lifeworld is noted; we thus close by raising the question of whether certain virtual worlds might potentially be employed to help restore the possibility of (perhaps temporarily) establishing a Husserlian lifeworld.

Embodied Light and Obscuring Parametricism: A Phenomenological-Aesthetic Analysis of Architectural Practice in the ‘Electronic World’

National academic conference ‘Wszechświat Disneya’ (‘The Disney Universe’) • Instytut Filologii Polskiej Wydziału Filologicznego Uniwersytetu Pedagogicznego and the Facta Ficta Research Centre, Kraków • December 9, 2017

ABSTRACT: This presentation applies a phenomenological approach to the study of two issues raised by the architectural practices depicted in the films Tron (1982) and Tron: Legacy (2010). First, we consider the use of embodied light as a building component in the ‘electronic world’ (or ‘second universe’) depicted in these films. The human programmer and the computer programs presented as the principal architects of the electronic world employ embodied light as a key physical element of buildings, bridges, roads, vehicles, clothing, and other objects, either creating or delineating tangible shapes within an otherwise undifferentiated darkness. Light is used, for example, to create platforms on which characters can stand and vertical walls that block other physical objects ‘colliding’ with them. We compare this phenomenon with the historical real-world use of light as an architectural element serving ornamental, representational, didactic, and functional roles and, in particular, with the use of lighting to simulate the existence of large physical architectural structures that do not in fact exist. The architectural use of embodied light raises aesthetic and ontological questions that can be investigated with the aid of a phenomenological approach.

Second, we examine the way in which the architecture of the films’ electronic world is presented as something co-designed by human beings and artificially intelligent computer programs created for that role. In these films, a human programmer selected the general aesthetic qualities to be manifested in the architecture of the electronic world and entrusted AI programs with the task of translating those goals into concrete architectural structures, thereby automating the process of building a world that satisfies the given aesthetic parameters. Tron: Legacy presents detailed debates between the human and AI co-architects regarding the merits of choosing aesthetic qualities such as freedom, openness, beauty, order, perfection, functional efficiency, regularity, predictability, and chaos as goals and parameters for the system’s architecture. The film argues that in AI-supported parametric architectural design, the selection of aesthetic qualities that appear desirable in themselves may nevertheless give rise to structures with unforeseen and highly undesirable properties. The AI-supported architectural process already alluded to in Tron and explored more explicitly in Tron: Legacy can thus be interpreted as a (perhaps unintended) prophecy and critique of contemporary generative and parametric design techniques and of the Parametricism movement in particular.

An Axiology of Information Security for Futuristic Neuroprostheses: Upholding Human Values in the Context of Technological Posthumanization

Frontiers in Neuroscience 11, 605 (2017); MNiSW 2016 List A: 30 points; 2017 Impact Factor: 3.566

ABSTRACT: Previous works exploring the challenges of ensuring information security for neuroprosthetic devices and their users have typically built on the traditional InfoSec concept of the “CIA Triad” of confidentiality, integrity, and availability. However, we argue that the CIA Triad provides an increasingly inadequate foundation for envisioning information security for neuroprostheses, insofar as it presumes that (1) any computational systems to be secured are merely instruments for expressing their human users’ agency, and (2) computing devices are conceptually and practically separable from their users. Drawing on contemporary philosophy of technology and philosophical and critical posthumanist analysis, we contend that futuristic neuroprostheses could conceivably violate these basic InfoSec presumptions, insofar as (1) they may alter or supplant their users’ biological agency rather than simply supporting it, and (2) they may structurally and functionally fuse with their users to create qualitatively novel “posthumanized” human-machine systems that cannot be secured as though they were conventional computing devices. Simultaneously, it is noted that many of the goals that have been proposed for future neuroprostheses by InfoSec researchers (e.g., relating to aesthetics, human dignity, authenticity, free will, and cultural sensitivity) fall outside the scope of InfoSec as it has historically been understood and touch on a wide range of ethical, aesthetic, physical, metaphysical, psychological, economic, and social values. We suggest that the field of axiology can provide useful frameworks for more effectively identifying, analyzing, and prioritizing such diverse types of values and goods that can (and should) be pursued through InfoSec practices for futuristic neuroprostheses.

The Handbook of Information Security for Advanced Neuroprosthetics

ISBN 978-1-944373-09-2 • Second edition • Synthypnion Academic, 2017 • 324 pages

How does one ensure information security for a computer that is entangled with the structures and processes of a human brain – and for the human mind that is interconnected with such a device? The need to provide information security for neuroprosthetic devices grows more pressing as increasing numbers of people utilize therapeutic technologies such as cochlear implants, retinal prostheses, robotic prosthetic limbs, and deep brain stimulation devices. Moreover, emerging neuroprosthetic technologies for human enhancement are expected to increasingly transform their human users’ sensory, motor, and cognitive capacities in ways that generate new ‘posthumanized’ sociotechnological realities. In this context, it is essential not only to ensure the information security of such neuroprostheses themselves but – more importantly – to ensure the psychological and physical health, autonomy, and personal identity of the human beings whose cognitive processes are inextricably linked with such devices. InfoSec practitioners must not only guard against threats to the confidentiality and integrity of data stored within a neuroprosthetic device’s internal memory; they must also guard against threats to the confidentiality and integrity of thoughts, memories, and desires existing within the mind of the device’s human host.

This second edition of The Handbook of Information Security for Advanced Neuroprosthetics updates the previous edition’s comprehensive investigation of these issues from both theoretical and practical perspectives. It provides an introduction to the current state of neuroprosthetics and expected future trends in the field, along with an introduction to fundamental principles of information security and an analysis of how they must be re-envisioned to address the unique challenges posed by advanced neuroprosthetics. A two-dimensional cognitional security framework is presented whose security goals are designed to protect a device’s human host in his or her roles as a sapient metavolitional agent, embodied embedded organism, and social and economic actor. Practical consideration is given to information security responsibilities and roles within an organizational context and to the application of preventive, detective, and corrective or compensating security controls to neuroprosthetic devices, their host-device systems, and the larger supersystems in which they operate. Finally, it is shown that while implantable neuroprostheses create new kinds of security vulnerabilities and risks, they may also serve to enhance the information security of some types of human hosts (such as those experiencing certain neurological conditions).

The Diffuse Intelligent Other: An Ontology of Nonlocalizable Robots as Moral and Legal Actors

In Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov, pp. 177-98 • Farnham: Ashgate, 2016

ABSTRACT: Much thought has been given to the question of who bears moral and legal responsibility for actions performed by robots. Some argue that responsibility could be attributed to a robot if it possessed human-like autonomy and metavolitionality, and that while such capacities can potentially be possessed by a robot with a single spatially compact body, they cannot be possessed by a spatially disjunct, decentralized collective such as a robotic swarm or network. However, advances in ubiquitous robotics and distributed computing open the door to a new form of robotic entity that possesses a unitary intelligence, despite the fact that its cognitive processes are not confined within a single spatially compact, persistent, identifiable body. Such a “nonlocalizable” robot may possess a body whose myriad components interact with one another at a distance and which is continuously transforming as components join and leave the body. Here we develop an ontology for classifying such robots on the basis of their autonomy, volitionality, and localizability. Using this ontology, we explore the extent to which nonlocalizable robots—including those possessing cognitive abilities that match or exceed those of human beings—can be considered moral and legal actors that are responsible for their own actions.
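
The ontology's three classificatory axes can be illustrated with a minimal sketch (in Python; the class and function names are hypothetical, as the chapter itself proposes no code):

```python
from dataclasses import dataclass

@dataclass
class RobotProfile:
    """Hypothetical profile capturing the ontology's three axes."""
    autonomous: bool       # acts without continuous external control
    metavolitional: bool   # can reflect on and revise its own volitions
    localizable: bool      # confined to one compact, persistent, identifiable body

def candidate_moral_actor(robot: RobotProfile) -> bool:
    # On the view sketched in the abstract, candidate moral/legal actorhood
    # tracks autonomy and metavolitionality -- not localizability.
    return robot.autonomous and robot.metavolitional

# A decentralized swarm whose components continuously join and leave its 'body':
swarm = RobotProfile(autonomous=True, metavolitional=True, localizable=False)
print(candidate_moral_actor(swarm))  # True: nonlocalizability alone does not disqualify it
```

The point of the sketch is that localizability is recorded but deliberately excluded from the actorhood test, mirroring the abstract's claim that a spatially disjunct robot may still be a responsible actor.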

Managing the Ethical Dimensions of Brain-Computer Interfaces in eHealth: An SDLC-based Approach

In 9th Annual EMAB Conference: Innovation, Entrepreneurship and Digital Ecosystems (EUROMED 2016) Book of Proceedings, edited by Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 876-90 • Engomi: EuroMed Press, 2016

ABSTRACT: A growing range of brain-computer interface (BCI) technologies is being employed for purposes of therapy and human augmentation. While much thought has been given to the ethical implications of such technologies at the ‘macro’ level of social policy and ‘micro’ level of individual users, little attention has been given to the unique ethical issues that arise during the process of incorporating BCIs into eHealth ecosystems. In this text a conceptual framework is developed that enables the operators of eHealth ecosystems to manage the ethical components of such processes in a more comprehensive and systematic way than has previously been possible. The framework’s first axis defines five ethical dimensions that must be successfully addressed by eHealth ecosystems: 1) beneficence; 2) consent; 3) privacy; 4) equity; and 5) liability. The second axis describes five stages of the systems development life cycle (SDLC) process whereby new technology is incorporated into an eHealth ecosystem: 1) analysis and planning; 2) design, development, and acquisition; 3) integration and activation; 4) operation and maintenance; and 5) disposal. Known ethical issues relating to the deployment of BCIs are mapped onto this matrix in order to demonstrate how it can be employed by the managers of eHealth ecosystems as a tool for fulfilling ethical requirements established by regulatory standards or stakeholders’ expectations. Beyond its immediate application in the case of BCIs, we suggest that this framework may also be utilized beneficially when incorporating other innovative forms of information and communications technology (ICT) into eHealth ecosystems.
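
The framework's two axes lend themselves to a simple lookup structure. The sketch below (Python; the single example entry is a hypothetical illustration, not taken from the paper's own mapping) shows how the resulting 5 × 5 matrix might be populated by an eHealth operator:

```python
# The five ethical dimensions and five SDLC stages named in the abstract.
ETHICAL_DIMENSIONS = ["beneficence", "consent", "privacy", "equity", "liability"]
SDLC_STAGES = [
    "analysis and planning",
    "design, development, and acquisition",
    "integration and activation",
    "operation and maintenance",
    "disposal",
]

# Each (dimension, stage) cell collects the ethical issues mapped onto it.
matrix = {(d, s): [] for d in ETHICAL_DIMENSIONS for s in SDLC_STAGES}

# Hypothetical illustrative entry:
matrix[("privacy", "disposal")].append(
    "verify that recorded neural data is irrecoverably erased from retired BCI hardware"
)

print(len(matrix))  # 25 cells: five dimensions x five SDLC stages
```

Auditing the matrix for empty cells then gives the operator a checklist of (dimension, stage) pairs whose ethical requirements have not yet been addressed.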

Information Security Concerns as a Catalyst for the Development of Implantable Cognitive Neuroprostheses

In 9th Annual EMAB Conference: Innovation, Entrepreneurship and Digital Ecosystems (EUROMED 2016) Book of Proceedings, edited by Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 891-904 • Engomi: EuroMed Press, 2016

ABSTRACT: Standards like the ISO 27000 series, IEC/TR 80001, NIST SP 1800, and FDA guidance on medical device cybersecurity define the responsibilities that manufacturers and operators bear for ensuring the information security of implantable medical devices. In the case of implantable cognitive neuroprostheses (ICNs) that are integrated with the neural circuitry of their human hosts, there is a widespread presumption that InfoSec concerns serve only as limiting factors that can complicate, impede, or preclude the development and deployment of such devices. However, we argue that when appropriately conceptualized, InfoSec concerns may also serve as drivers that can spur the creation and adoption of such technologies. A framework is formulated that describes seven types of actors whose participation is required in order for ICNs to be adopted; namely, their 1) producers, 2) regulators, 3) funders, 4) installers, 5) human hosts, 6) operators, and 7) maintainers. By mapping onto this framework InfoSec issues raised in industry standards and other literature, it is shown that for each actor in the process, concerns about information security can either disincentivize or incentivize the actor to advance the development and deployment of ICNs for purposes of therapy or human enhancement. For example, it is shown that ICNs can strengthen the integrity, availability, and utility of information stored in the memories of persons suffering from certain neurological conditions and may enhance information security for society as a whole by providing new tools for military, law enforcement, medical, or corporate personnel who provide critical InfoSec services.
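
The seven-actor framework can be sketched as a small bookkeeping structure (Python; the example concerns and the scoring rule are our hypothetical illustrations, not entries from the paper):

```python
# The seven actor types whose participation is required for ICN adoption.
ACTORS = ("producers", "regulators", "funders", "installers",
          "human hosts", "operators", "maintainers")

# For each actor, InfoSec concerns can cut either way.
concerns = {actor: {"incentives": [], "disincentives": []} for actor in ACTORS}

# Hypothetical illustrative entries:
concerns["human hosts"]["incentives"].append(
    "an ICN can strengthen the integrity and availability of personal memories")
concerns["producers"]["disincentives"].append(
    "liability exposure from device-level security vulnerabilities")

def net_stance(actor: str) -> str:
    # Toy rule: compare the counts of recorded incentives and disincentives.
    c = concerns[actor]
    delta = len(c["incentives"]) - len(c["disincentives"])
    return ("incentivized" if delta > 0
            else "disincentivized" if delta < 0
            else "ambivalent")

print(net_stance("human hosts"))  # incentivized
```

A real analysis would weight concerns rather than count them, but even this toy version captures the paper's claim that the same category of concern can push different actors in opposite directions.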

From Stand Alone Complexes to Memetic Warfare: Cultural Cybernetics and the Engineering of Posthuman Popular Culture

50 Shades of Popular Culture International Conference • Facta Ficta Research Centre, Kraków • February 19, 2016

ABSTRACT: Here we argue that five emerging social and technological trends are creating new possibilities for the instrumentalization (or even “weaponization”) of popular culture for commercial, ideological, political, or military ends and for the development of a posthuman popular culture that is no longer solely produced by or for “humanity” as presently understood. These five trends are the: 1) decentralization of the sources of popular culture, as reflected in the ability of ordinary users to create and upload content that “goes viral” within popular culture, as well as the use of “astroturfing” and paid “troll armies” by corporate or state actors to create the appearance of broad-based grassroots support for particular products, services, actions, or ideologies; 2) centralization of the mechanisms for accessing popular culture, as seen in the role of instruments like Google’s search engine, YouTube, Facebook, Instagram, and Wikipedia in concentrating the distribution channels for cultural products, as well as efforts by state actors to censor social media content perceived as threatening or disruptive; 3) personalization of popular culture, as manifested in the growth of cultural products like computer games that dynamically reconfigure themselves in response to a player’s behavior, thereby creating a different product for each individual that is adapted to a user’s unique experiences, desires, and psychological characteristics; 4) automatization of the creation of products of popular culture, as seen in the automated high-speed generation of webpages, artwork, music, memes, and computer game content by AI systems that could potentially allow venues of popular culture (such as the Internet) to be flooded with content designed to influence a social group in particular ways; and 5) virtualization of the technological systems and mechanisms for creating, transmitting, and experiencing the products of popular culture, as witnessed in the development of all-purpose nodes (such as smartphones) that are capable of handling a full range of cultural products in the form of still images, video, audio, text, and interactive experiences, and the growing digitalization of cultural products that allows them to be more easily manipulated and injected into the popular culture of other states or social groups, bypassing physical and political barriers.

While these trends are expected to yield a broad range of positive and negative impacts, we focus on a particular subset of these impacts. Namely, we argue that the convergence of these five trends opens the door for the creation of popular culture that: 1) does not exist in any permanent, tangible physical artifacts but only as a collection of continuously transforming digital data that is stored on the servers of a few powerful corporate or state actors and is subject to manipulation or degradation as a result of computer viruses, hacking, power outages, or other factors; 2) can be purposefully and effectively engineered using techniques commonly employed within IT management, electronics engineering, marketing, and other disciplines; 3) can become a new kind of weapon and battleground in struggles for military, political, ideological, and commercial superiority on the part of corporate, state, and other actors.

In order to stimulate thinking about ways in which these trends might develop, we conclude by considering two fictional near-future worlds – those depicted in Ghost in the Shell: Stand Alone Complex and Transhuman Space: Toxic Memes – in which the further evolution of these five trends is shown as leading to the neurocybernetically facilitated manipulation of popular culture, “memetic warfare,” and related phenomena. We suggest that these fictional works represent examples of self-reflexive futurology: i.e., elements of contemporary popular culture that attempt to anticipate and explore the ways in which future popular culture could be purposefully engineered, instrumentalized, and even weaponized in the service of a diverse array of ends.

Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money that Advances Human Ethical Values

Annales. Etyka w Życiu Gospodarczym / Annales: Ethics in Economic Life 18, no. 4 (2015), pp. 85-98; MNiSW 2015 List B: 10 points

ABSTRACT: Cryptocurrencies like Bitcoin are offering new avenues for economic empowerment to individuals around the world. However, they also provide a powerful tool that facilitates criminal activities such as human trafficking and illegal weapons sales that cause great harm to individuals and communities. Cryptocurrency advocates have argued that the ethical dimensions of cryptocurrency are not qualitatively new, insofar as money has always been understood as a passive instrument that lacks ethical values and can be used for good or ill purposes. In this paper, we challenge the presumption that money must be “value-neutral.” Building on advances in artificial intelligence, cryptography, and machine ethics, we argue that it is possible to design artificially intelligent cryptocurrencies that are not ethically neutral but which autonomously regulate their own use in a way that reflects the ethical values of particular human beings – or even entire human societies. We propose a technological framework for such cryptocurrencies and then analyze the legal, ethical, and economic implications of their use. Finally, we suggest that the development of cryptocurrencies possessing ethical as well as monetary value can provide human beings with a new economic means of positively influencing the ethos and values of their societies.
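
The core mechanism described above – a currency that autonomously screens its own transactions against an ethical model – can be caricatured in a few lines (a hypothetical Python sketch; the function names and the toy scoring rule are ours, not the paper's):

```python
def ethics_score(transaction: dict) -> float:
    """Stand-in for a trained machine-ethics model; here a single
    hypothetical 'weapons' category is flagged for illustration."""
    return 0.1 if transaction.get("category") == "weapons" else 0.9

def validate(transaction: dict, threshold: float = 0.5) -> bool:
    # The currency refuses transfers its embedded ethical model rejects,
    # without any human intermediary in the loop.
    return ethics_score(transaction) >= threshold

print(validate({"amount": 5, "category": "books"}))    # True
print(validate({"amount": 5, "category": "weapons"}))  # False
```

In the paper's framing, the interesting design questions lie in whose values the scoring model encodes and how its judgments could be audited, not in the validation plumbing itself.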

Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman Neuropolity

Creatio Fantastica no. 3(50) (2015)

ABSTRACT: While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings that constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games.
