CREATIVITY AND EVALUATION: SUPPORTING PRACTICE AND RESEARCH IN THE INTERACTIVE ARTS
Linda Candy
Creativity & Cognition Studios, School of Software, Faculty of Engineering and IT, University of Technology, Sydney
www.lindacandy.com
EVA2013 Keynote Talk, Tuesday 30th July 2013, London

Abstract

The paper explores ideas about evaluation in creativity and, in particular, how it contributes to creativity in the context of interactive arts practice. Evaluation is a central activity in creativity, one that involves assessing progress, exercising judgments and sometimes changing direction, both during and after the process of making an artwork. The ideas presented draw upon experience in user-centred design in HCI and over ten years of practice-based research.
Introduction
In May-June 2013, several important events took place in Sydney: the Vivid festival, immediately followed by ISEA 2013 and two conferences on creativity and computational technology, Computational Creativity and Creativity and Cognition 2013. If anyone was in doubt about the power of interactive digital arts to draw crowds and engage people in large numbers, the 2013 Vivid festival put paid to that. An annual festival of light and colour art that lights up the city in winter, it became even more significant than in previous years because it played directly to the general public rather than the arts community. Throughout the cityscape, installations of great variety attracted fascinated adults and excited children, and the level of interest in street art that could be touched and played with was enormous. The sheer weight of numbers overwhelmed the normally underused harbour ferries and put extra pressure on the already overburdened train services. For a magical ten days, people could interact with their city in extraordinary ways. They could turn the iconic iron Sydney Harbour Bridge into a multi-coloured panorama of over one hundred thousand LED lights, an installation designed by the 32 Hundred Light company and showcasing Intel's latest tablet interface. With Luna Park next door as a backdrop, you could record your colour bridge art for posterity with a commemorative photograph. The public took readily to touching and exploring the interactive works placed around the harbour's edge, from the Opera House on Bennelong Point around Sydney Cove to Walsh Bay and on to Darling Harbour. The city buildings were canvasses for visual extravaganzas on which the
effects of interactive participation were displayed. In the gloomy dark of a mild Sydney winter, the city was lit up and transformed into a play space, rather like the effect of Christmas festivals in the northern hemisphere, but with the added value of being able to interact with the art itself. The information panels placed next to each installation told an interesting story: the works were as often by architects, designers, engineers and performance artists as by dedicated digital artists, and many were by commercial companies showing off their new wares. At Vivid, interactive art emerged from the research laboratory and the art studio and invaded the city streets in a major way. All the artworks seemed to 'work' well, and the public clearly loved it. At the opening of an event at the Powerhouse Museum for the associated ISEA 2013 symposium, the Australia Council representative for the digital arts was, needless to say, very satisfied: the judgment of the public had secured his area's funding, perhaps only for the present, but in today's austerity climate that was more than enough. Justification had been achieved by evidence of large-scale public engagement. So the job is done, is it not? We have reached a state of grace: interactive art as public entertainment, firmly entrenched in the public mind (at least in the southern hemisphere). Could we wish for anything more? Is there more to do? You could argue that Simon Brockwell really did intend his sound and light piano work to be a challenge for those of smaller stature. From an HCI perspective, the usability of this interface could be better. But is usability for interactive art a top priority for evaluation? Is checking usability the best role for evaluation in interactive digital art?

The Trouble with Evaluation

Why is the idea of evaluation a problem? Artists in particular often have strong feelings about the idea of having to evaluate their work.
They may believe it is not possible, claim it is too hard and not their job anyway, and feel that it adds to the already heavy burden of creating. The word 'evaluation' often carries negative connotations of scientific assessment rather than being seen as an integral part of the process. In truth, however, artists are constantly
evaluating what they do; they may just prefer to do it privately, and they certainly don't call it by that name. What kinds of assumptions are made about evaluation that make it problematic for practitioners? The very thought of doing 'evaluation' in the arts world seems to be a problem for many people faced with increasing demand for it from funding bodies and governments. These are some of the things I have been hearing as I go about my research:

1. It is necessary but is a lot of trouble: "I know what I like; I don't need any kind of formal evaluation." Is this because the perception is that there is too little return on effort?
2. It is demanded by funding bodies who have to please political masters. The UK culture secretary made much of the need to justify arts funding from the public purse: this is dangerous ground in the eyes of many within the arts, because it is the thin end of a wedge that makes the arts serve external purposes.
3. Evaluation is all about measuring things, and there are some things, like art, that you just can't measure in a quantitative way; or, if you must measure, you measure only that which can be measured, such as how much revenue a project generated and how many people saw it.

All these notions rest on a view of evaluation coloured by negative connotations. Measuring means reduction, but can measuring be applied to creativity? Some people think so. Can we evaluate the quality of art by measuring? If you wanted to measure the difference in quality between the paintings of Picasso and Cézanne, how would you go about it? David Galenson, an economist, approached it by quantifying the relationship between the price of an artist's work at auction and the artist's age. Drawing on evidence from different, independent judges, he carried out an econometric analysis of painting prices and quantified the relationship between age and price.
This is an example of using quantified information to measure the differences between artists based upon independently derived judgments of worth. The measures were:
1. Market price at auction (Le Guide Mayer, 1970-97)
2. Textbook illustrations
3. Retrospective exhibitions
Galenson proposed that the variation in the prices of a particular painter's works could be accounted for, in part, by the values of a set of associated variables (size, support, and date of sale at auction). He isolated the effect of an artist's age at the time of painting a work from the other variables and calculated the relationship between age and price. Using this kind of measurement, he showed that Picasso's most valuable work was painted at age 26, whereas for Cézanne high value came much later in life. 'Les Demoiselles d'Avignon' is Picasso's most valuable work in monetary and reputational
terms. By contrast, Cézanne's most valuable works were painted at age 67, in the series of paintings of Mont Sainte-Victoire. Taking age and the number of textbook illustrations in published works, Galenson shows that the single year represented by the largest number of illustrations is the same year estimated to be that of the artist's peak value: Picasso 26, Cézanne 67. Now we have to ask ourselves: is this what we expect and wish the role of evaluation to be in the arts? In my view, if that were all that was happening, most people would agree that, whilst it is an interesting exercise, it rather misses the point about what quality in art is. In fact, Galenson had a different motive for using evaluation by measurement in art, for his real interest is not the value of an artist's paintings but the creative process itself. He used economic measures to understand and differentiate between individual artists' creative life cycles. What we are seeing here is a way that measurement can be used to probe and structure an analysis of creativity throughout an artist's lifetime. The measures are not there for their own sake but as a tool for interrogating information from the histories of the artists' lives. The method was applied to modern artists classified according to life-cycle style: experimental versus conceptual. By examining the careers of painters, sculptors, poets and novelists, he explores the nature of artistic creativity using a wide range of evidence and shows that there are two fundamentally different approaches to innovation, each associated with a distinct pattern of discovery over a lifetime. Experimental innovators work by trial and error and arrive at their major contributions gradually, late in life. In contrast, conceptual innovators make sudden breakthroughs by formulating new ideas, usually at an early age.
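The kind of regression Galenson used can be sketched on synthetic data: regress log auction price on the artist's age, age squared and a size variable, then read off the age at which estimated value peaks. Everything below is invented for illustration (the coefficients, the simulated "conceptual innovator" peaking at 26, the use of ordinary least squares via NumPy); Galenson's actual econometric analysis worked from real auction records and a richer set of variables.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
age = rng.uniform(18, 80, n)       # artist's age when the work was painted
size = rng.uniform(0.5, 4.0, n)    # canvas area, a stand-in for size/support

# Simulated "conceptual innovator": log price is quadratic in age,
# with its true peak placed at age 26 (since 0.104 / (2 * 0.002) = 26).
log_price = (2.0 + 0.104 * age - 0.002 * age**2 + 0.4 * size
             + rng.normal(0, 0.1, n))

# Ordinary least squares: log_price ~ 1 + age + age^2 + size.
X = np.column_stack([np.ones(n), age, age**2, size])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Holding size constant, estimated value peaks where the derivative
# of the fitted age profile is zero: -b1 / (2 * b2).
peak_age = -coef[1] / (2 * coef[2])
print(f"estimated peak age: {peak_age:.1f}")
```

The point of the sketch is the separation step: once the size variable is controlled for in the regression, the age profile alone tells you where in the life cycle value peaked.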
By revealing the differences between experimental creators and conceptual creators, he provides new insights into the lifetime processes of outstanding examples of creativity (Galenson, 2007). What other kinds of evaluation are taking place around art today? Impact measures of evaluation are required to defend projects where public money is involved. For galleries and museums, success relies on:
• Measuring attendance
• Recording media presence
• Gaining national/international awards
• Meeting project goals and deadlines
• Surveying audience attitudes
This is where evaluation of this kind becomes dangerous ground. Does it really benefit the arts? Does the 10% of GDP that the creative industries contribute to UK plc ensure the continuation of public funding? At a time of austerity measures, the 30% reduction in arts funding tells its own story. In other words, impact and economic measures are no guarantee of funding. If the main value in evaluation is to justify the arts in terms of economic value or popularity, then we are on a hiding to nothing, because the other values in art
are submerged under these kinds of criteria. This leads to what Celine Latulipe calls the Value Reduction Problem: "Justifying artistic and creativity projects by their economic impact is value reductionism. It does not honor the important role that arts and creativity play in the world, and it narrows the framework through which such projects are evaluated. It implies that creativity projects that do not lead to high ticket sales, innovation or economic growth are not worthwhile" (Latulipe, 2013).
The Value Reduction Problem is that if creativity projects do not give rise to high impact and economic growth, then by implication they have less value to society. Latulipe calls upon practitioners and researchers to assert artistic values over political and economic values. I agree with that, and want to promote an idea of evaluation that is in the service of art practice and creative practice: evaluation that can have immediate impact and at the same time serve creativity in the longer term. There is an increasing drive towards finding more systematic ways of embedding evaluation into institutional art programmes and funded projects. Traditionally, evaluation has been associated with measuring impact, often through simple quantitative outcomes such as footfall and visitor satisfaction indexes. Public policy and institutional approaches to evaluation have predominated, and until recently there has been less attention to the role evaluation can play in the creative process of practitioner artists in the public art sphere. The public art think tank IXIA, funded by the Arts Council of England, was set up to promote and influence the development of art policies and strategies. In 2004 it commissioned OPENspace to carry out research into ways of evaluating public art. This gave rise to a series of seminars, and a guide to evaluation was published in 2009 and revised in March 2013 (IXIA, 2013). This kind of document is useful for scoping the main issues that organisations and individuals need to take on board when contemplating evaluation, but there is still a considerable gap between advice and actual practice: practice requires methods, and methods need to be learnt and tested. Thus, whilst the IXIA initiative is important and welcome, it nevertheless addresses only one aspect of the evaluation requirements for public art. A recent (and ongoing) survey indicates that in the interactive digital arts, evaluation by impact measures is far from the complete picture (Candy et al, 2013).
Perhaps this is because it is a relatively new area that is not in receipt of large quantities of public money; or perhaps not. Could it be because this kind of art inhabits a different world from that of the great public art institutions, and because the people making it come from a different background from traditional art historians, curators and artists? They are also researchers, technologists and scientists, and they approach the question of evaluation from a more pragmatic perspective, whereby the purpose of evaluation is to support the creation of works and to ensure the satisfactory completion of projects. The involvement of researchers and universities provides a different perspective on the goals of the enterprise, and there is perhaps an opportunity for a more idealistic view of what this is all about. Collaboration requires the meeting of different mindsets and, with that, different approaches to assessment.
Evaluation in Creativity
I want to promote a different perspective on evaluation, one that views evaluation in the service of creative practice. Evaluation as a formative element in the creative process can have an immediate impact on choices and paths taken, and at the same time serve the development of practitioner expertise for the longer term. Evaluation of the kind I will describe below can benefit the creative process and potentially change art practice for the better. There is a more sustainable benefit too, in the contribution it can make to developing a different kind of mindset amongst practitioners, whether they be designers, artists, engineers or technologists: a mindset that is conducive to sharing ideas and experiences in collaboration. The ability to share ideas, insights, solutions and experiences is particularly important in the interactive arts world, where inter-disciplinary collaboration is often needed in order to make complex, leading-edge installations. Evaluation can benefit the creative process, art practice, collaboration and sharing by playing a formative role in the creative process. This involves helping practitioners to:
• make decisions during the creative activity
• show what happens as a result of actions taken
• learn from mistakes
• shape future activities
Evaluation that supports creative practice draws upon two sources: first, evidence and, second, experience; the first builds knowledge, the second builds expertise. Evidence-based evaluation involves:
• making judgments, based on evidence, about value and quality
• learning from experience to improve outcomes in the future
• developing evaluative frameworks
Developing evaluation as a tool to strengthen creative practice supports and sustains future development.
Where are the seeds of the future being sown in the creation and evaluation of the new forms of art, many of which are highly interactive and involve audience engagement in a way that is integral to the work? Learning how to embed evaluation in creativity is beginning to happen in the area of Practice-Based Research (PBR), particularly in the type of practice-based research where gathering and assessing data is an integral part of the process. PBR of this kind is not widespread, particularly in respect of the systematic way evidence is gathered. Systematic gathering of evidence, as distinct from the everyday casual observations that artists undertake during their creative process, requires new skills and a willingness to challenge assumptions. The reason for the current low penetration of evidence-based methods in arts practice is, in my view, partially a reluctance to embrace new approaches to research within practice. Negative views are deeply entrenched, and systematic approaches are often viewed with the suspicion that they take something away from art making rather than adding to it. This is not to
underestimate the challenge that evaluation can pose to the time, effort and resources available to the artist embarking on difficult new projects. Practice-based research is concerned with making works and reflecting on process, acquiring evidence that informs decision making in the creative process through:
• Experimentation
• Observation (casual / systematic)
• Reflection in and on action

Evaluation in Art and Practice-Based Research

There are multiple dimensions to evaluation in art, practice-based research and collaboration:
• Moving through reflection in action to empirical evidence
• Practitioner frameworks and working theories in use
• Models of interaction for designing and evaluating interactive art systems
• Observing and recording audience engagement
• Analysing information about participant behaviour
• Relating and comparing interactive and participant experiences
• Following artistic instincts and intentions while satisfying your sponsors

Evaluation in Art Practice: artist using evidence
This work was supported by Arts Council England (ACE) funding for evaluation research at Site Gallery, Sheffield, and is described in a recent paper presented at ISEA 2013 (Candy et al, 2013). Here I will focus on the artist's use of evidence for evaluation. From the interview and video data, several categories of findings were identified, based upon questions about audience response, curatorial design and the effect of carrying out this kind of exercise with the public. To focus on how an artist might evaluate such data: in this case, the artist drew out some interesting observations about audience response to the 'Shaping Space' work. In particular, he observed, from the keywords extracted across the 25 participants, a difference between what we have called analytic versus affective responses. The distinction was in responses to the interactivity and experience of the 'Shaping Space' interactive work, in which two screens hung in space with back projection of the images, and two cameras captured motion that was fed into the program and influenced the colour range and changes in display elements. The experience of an interactive artwork or installation can work in many different dimensions: see the affective responses in the right-hand column of Table 1 below.
Table 1: Analytic and Affective Responses to 'Shaping Space' (two columns of participant keywords, headed ANALYTIC and AFFECTIVE; the entries themselves are not reproduced here)
How does evidence of this kind provide the artist with a means of evaluating the audience response to their work? First, it is important to note that whilst artists regularly gain insights from observing audience response to their artworks in situ, a more systematic study can provide deeper levels of understanding that do not necessarily come from casual observation. Artist comments:
“The question for me is, what is it about the work that encourages these very different types of response?”
Affective Experience: “My work is intended to work at this level. The reactions were encouraging and affirm the direction I am going in already. At the same time I was surprised at how strong they were.”
Analytic Experience: “The comments on interactivity were unexpected and raised questions about using movement as an element in that particular context. In future work, I plan to change the response mechanisms to respond to the findings. The slow response mechanisms need to be clearer, for example. It will be an experimental process.”
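The keyword analysis behind these observations can be sketched in a few lines: tally each participant's keywords against the two response categories. The category word lists and the sample responses below are invented for illustration; they are not drawn from the actual Site Gallery study data.

```python
from collections import Counter

# Hypothetical category vocabularies (not the study's actual keywords).
ANALYTIC = {"interactive", "movement", "cameras", "technology", "system"}
AFFECTIVE = {"calm", "immersive", "beautiful", "playful", "meditative"}

# Hypothetical per-participant keyword lists extracted from interviews.
responses = [
    ["calm", "beautiful", "movement"],
    ["interactive", "system"],
    ["immersive", "playful", "cameras"],
]

# Tally how often each category of response occurs across participants.
tally = Counter()
for keywords in responses:
    for kw in keywords:
        if kw in ANALYTIC:
            tally["analytic"] += 1
        elif kw in AFFECTIVE:
            tally["affective"] += 1

print(dict(tally))
```

A real study would of course derive the category vocabularies from the data itself (for example by coding transcripts), but the principle is the same: a simple, repeatable count turns casual impressions of audience response into comparable evidence.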
For the artist in this case, the affective responses to the experience of 'Shaping Space' were encouraging insofar as they affirmed the direction he was aiming for. Nevertheless, he was surprised by the strength of that response and felt that he could build more confidently upon an
affirmation of that direction. By contrast, however, the analytic responses to the interactive experience itself were unexpected and revealed a variety of questioning responses about using interactive capability that relied on movement. Adults were inhibited in ways that children were not, with many refusing to stay long in the space or consciously avoiding movement altogether, leaning very still against the wall watching the work. This gave rise to some doubt about showing that particular artwork, reliant as it was on physical movement, in a dedicated space that acted more like a contemplative cell, a "sacred space". On the other hand, from the artist's perspective, the audience response might turn out to be a very positive element in changing his existing assumptions about using interactivity in relation to certain kinds of artworks.

Evaluation in Practice-Based Research: a Trajectory Model

We have looked at this process of gathering data and evaluating it more systematically in a study of a number of artists doing practice-based research, which involved evaluation with audiences at 'Betaspace' in the Powerhouse Museum, Sydney. Some of it has already been documented in a book representing ten years of digital art practice-based research (Candy and Edmonds, 2011). From the study we identified a model of practice and research in which evaluation played a key role in the interpretation of data. The context is PBR in doctoral research, and there were common elements, such as being able to create artefacts as part of the research process. What you see here is a standard trajectory model whereby the activity moves between practice (making works), theory (identifying criteria and frameworks) and evaluation (documented reflections on decisions and interpretations of data). There are many different pathways through the trajectory model; here is one in which the practice itself drives the creation of new theory.
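Such a pathway can be thought of as a sequence of moves between the three activities. The encoding below is purely illustrative, a hypothetical way of writing the trajectory model down, not anything from the study itself; the only rule it enforces is that each step names one of the three activities and differs from the step before, since the model describes the activity moving between them.

```python
# The three activities of the trajectory model, as named in the text.
STAGES = {"practice", "theory", "evaluation"}

def is_valid_trajectory(pathway):
    """Check that a pathway is a non-empty sequence of known stages
    in which each step moves to a different stage than the last."""
    if not pathway:
        return False
    previous = None
    for stage in pathway:
        if stage not in STAGES or stage == previous:
            return False
        previous = stage
    return True

# A hypothetical practice-led pathway: making work, evaluating it,
# deriving theory from the evaluation, then returning to practice.
practice_led = ["practice", "evaluation", "theory", "practice"]
print(is_valid_trajectory(practice_led))
```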
This represents the research process of Andrew Johnston, whose chapter in the Interacting book provides more detail (Johnston, 2011). His model of different forms of interaction between performer and art system (instrumental, ornamental and conversational) was a knowledge outcome of this work (see details in Candy and Edmonds, 2011). The embedding of evaluation and research into Andrew's practice is demonstrated by the ongoing collaborative projects and new understandings he has achieved from those collaborations.
Sean Clark is an artist researching the use of systems concepts in interactive art. His framework arose from a simple model of audience interaction with an artwork. As he worked on the evaluation of his work, he discovered that, in an exhibition with more than one work, there was potential for interaction between the works as well as with the audience. His framework moved from systems to ecologies, and he is now using the trajectory model to map his practice and research process. Andrew Johnston, likewise, has continued to deploy the framework he developed in his practice-based research. This work and others will be presented in a forthcoming book in the Springer Cultural Computing series (Candy and Ferguson, 2014).

Evaluation influencing Practice

The examples discussed above were:
• Light Logic evidence: influenced the future direction for making art
• Comparing exhibitions of art systems: influenced the creation of a new framework for art practice
• Extending an existing framework: from individual to collaborative strategies
Practitioners such as these, at the same time as creating new work, are developing new forms of practitioner knowledge that will inform their practice on a lifetime basis.
Conclusions
To conclude: I have argued that we need to view evaluation as something to be embedded in the creative process, not only to be better able to create new forms of artwork but also to generate new knowledge that practitioners can use to further their professional expertise. To that end, I have focused on the notion of evidence-based evaluation, from which practitioners create new work and at the same time generate new understandings. To achieve that, evaluation becomes an integral part of influencing the process and the outcomes. In the arts this is a new form of evaluation, one that draws upon HCI and social science for its methodology. I believe it is necessary for the future of the interactive digital arts, and perhaps the arts in general, because of the essential collaboration across disciplines needed to achieve successful interactive arts of the future. Vivid gave us Cybernetic Serendipity for the public at large in 2013. But where to next? For future progress, we need to build a more sustainable foundation for the development of creative practice. I believe that evaluation can play a key role if it is founded on expertise that comes from evidence-based approaches.

References

Candy, L. (2011). Research and Creative Practice. In Candy, L. and Edmonds, E.A. (eds), Interacting: Art, Research and the Creative Practitioner, Libri Publishing Ltd: Faringdon, UK: 33-59.

Candy, L. (2012). Evaluating Creativity. In Carroll, J.M. (ed), Creativity and Rationale: Enhancing Human Experience by Design, Springer: 57-84.

Candy, L. and Edmonds, E.A. (2011). Interacting: Art, Research and the Creative Practitioner, Libri Publishing Ltd: Faringdon, UK.

Candy, L., Edmonds, E., Alarcón, X. and Smith, S. (2013). Evaluation in Public Art: The Light Logic Exhibition, ISEA 2013.

Candy, L. and Ferguson, S. (2014). Interactive Experience in the Digital Age: Evaluating New Art Practice, Springer Cultural Computing Series.

Edmonds, E.A. and Candy, L. (2010). Relating Theory, Practice and Evaluation in Practitioner Research, Leonardo 43(5): 470-476.

Galenson, D. (2007). Old Masters and Young Geniuses: The Two Life Cycles of Artistic Creativity, Princeton University Press.

IXIA (2013). Public Art: A Guide to Evaluation, IXIA PA Limited: http://ixia-info.com/research/evaluation/

Latulipe, C. (2013). The Value of Research in Creativity and the Arts. Proceedings of the 9th ACM Conference on Creativity & Cognition: 1-10.