Freeform Surface Interfaces Created Using Additive Manufacturing

Freeform Surface Interfaces Created Using Additive Manufacturing MSc Thesis S.M. van Bennekom

Chair: Prof. dr. ir. J.M.P. Geraedts, Professor of New Mechatronic Design at the department of Design Engineering. Mentor: Ir. E.L. Doubrovski, PhD candidate at the department of Design Engineering.

Preface This document comprises my graduation thesis, executed at the Industrial Design faculty of the Delft University of Technology. The project spanned from September 2012 through August 2013 and was the finalization of the Integrated Product Design Master's program. It has been a long journey, but I guess it is one for most. I'm grateful for the diligent and enthusiastic support of my supervising team, with Prof. dr. ir. J.M.P. Geraedts as board chairman and PhD candidate ir. E.L. Doubrovski as my supervisor. I'd like to take this chance to thank all the people who have supported me during this project. My family and friends foremost, but also the many employees from the Industrial Design, Physics, Mechanical Engineering and Aerospace faculties. I'd also like to thank the many people from companies that helped me along the way. Special thanks to Bram de Smit for his constructive comments. Last but not least, I'd like to thank the people at the lab and at the workshop for their companionship and advice. Steve van Bennekom, The Hague, August 2013

Voor Niels & Pap

Executive Summary The assignment for this thesis was to develop a new visual product feature that is uniquely producible by additive manufacturing (AM) methods. In the first phase of the project the field of additive manufacturing was explored and the visual properties of products were studied. The most important finding was that some AM technologies are gaining the capability of creating optical product features such as lenses and waveguides. The insight that waveguides could be used to create display features with complex shapes led to the formulation of a vision that describes the emergence of devices with interactive product surface features, labeled 'Freeform Surface Interfaces' (FSI). It is found that such interfaces would have a profound impact on the way products are used and manufactured. Using experimental prototypes, the possibility of creating FSIs using LED matrices and printed lightguide matrices was investigated. An important step in this was the realization that LEDs can be used simultaneously for both display functionality and sensing functionality. The experiments show that such displays are possible, but that crucial steps need to be taken before such interfaces can become sufficiently reliable. An attempt is then made to link process variables of the printing process used to the observed attenuation in printed lightguides, but this proves unfeasible within the scope of this project and signifies the need for additional research. Lastly, a proof-of-concept prototype is made to demonstrate both current and expected future capabilities of printed lightguide FSIs. This prototype and the vision supporting it find strong validation with professional industrial designers.


Contents

1. Introduction 6

Starting Points 9
2. Visual Product Properties 10
3. Visual Features in Design 12
4. Additive Manufacturing 14

Incubation 23
5. Design Directions 24
6. Early Experimentation 26
7. Display Study 28
8. Displays in Products 30

Vision 33
9. Vision 34

Conceptualization 41
10. Choice of Concept Directions 42
11. Analysis: Remotes 44
12. Analysis: Alarm Clocks 46
13. Analysis: Dashboards 48
14. Concept Development 50

Materialization 59
15. LEDs 60
16. Touch Sensing & Machine Vision 62
17. Fiber Optics 64
18. Creation of a Lightguide Display 68
19. Touch Sensing on a Lightguide Display 72
20. Technological Challenges 75

Evaluation 79
21. Proof of Concept Prototype 80
22. Prototype Validation 86
23. Evaluation 90


1. Introduction 1.1 Project Scope and Context The field of 3D printing, or Additive Manufacturing (AM), has been rapidly progressing in the past decades, with multiple additive techniques being pursued. These techniques build shapes by adding small amounts of material together, allowing for the creation of shapes that are difficult or impossible to create using other means of manufacturing. While the technology is mainly used for on-the-fly creation of product prototypes (Rapid Prototyping), it is used more and more for the manufacturing of end user products. It is especially well suited for customizable or user-tailored products. Because AM allows directly specifying how a part is built up, it is possible to create parts which are anisotropic and have varying properties throughout the volume. This enables a designer to integrate multiple functionalities in a part and to create new types of features unique to AM.

Some features include: hollow shapes, double curved surfaces, undercuts, multiple-part assemblies, surface reliefs, component inserts and the customization of every printed part. Little research has been conducted into how these technologies can be used to integrate unique optical and visual features into parts. The goal of this assignment is to further such research and explore the scope of applications in the field of design engineering. Tests in the Fablab of the IDE Faculty have shown promising results for 3D printing objects with unique optical properties, such as reflectance and light conductivity. Such features could have applications in a wide range of products, from medical devices to entertainment.

1.2 The Assignment This assignment aims to explore applications of AM in designing products with unique visual properties. The starting point of the assignment is developing a thorough understanding of the technology and creating an overview of the possibilities that AM offers in creating part features with optical and visual properties unique to AM. Through experimentation, different possibilities will be explored. The next step is researching how these possibilities can be used to create new products or services. This requires identifying markets where these techniques can be an added value and exploring relevant search fields. The final steps are the designing and detailing of a proof-of-concept application of the explored features. The result should be evaluated using feedback from experts from relevant fields.

Left: Complex, coloured printed geometries. Middle: Printed lens. Right: Material with synthetic subsurface scattering structures


1.3 Report overview This report is divided into six sections: Starting Points - In this section the field of additive manufacturing is analyzed and the different technologies are compared. Visual product properties are then studied and a small inquiry is done to find out how designers impart visual product properties. The end of this section aims to link additive capabilities with visual product properties. Incubation - In this section the findings from the first phase are used to define three search directions: printed optics, colour FDM printing and textured FDM printing. These directions are explored to find a direction for the remainder of the project, and further analyses are done to lay the foundation of a vision. Vision - This section expounds a vision for a new product feature created using additive methods: freeform surface interfaces. Conceptualization - Three product directions are chosen and the vision is applied to these directions to develop crude concepts. One of the concepts is chosen to serve as a basis for the final prototype. Materialization - In this section further analyses are done in order to prepare for the creation of lightguide interfaces that comply with the vision. These experiments are then executed in order to show that it is possible to create such lightguide interfaces. Lastly, the capability of the additive technology used to print optical lightguides is discussed. Evaluation - Lastly, the creation of a proof-of-concept prototype is described and the validation of this concept with experts is discussed. Finally, future research possibilities are discussed and the graduation project itself is evaluated.


Starting Points

At the start of the project, I needed to gain familiarity with both the field of additive manufacturing and the visual product properties of consumer products. The subsequent chapters cover both analyses, which served as a foundation for the rest of the project.


2. Visual Product Properties To understand how additive manufacturing could be used to create new visual product features, it was necessary to investigate which such features exist and what material properties affect them. Since the fundamentals of light play an important role in how these material properties relate to visual properties, these were also studied, the results of which can be found in Appendix A. This chapter assumes understanding of the processes detailed in the appendix.

2.1 Visual Product Properties & Features After understanding the fundamental mechanisms of light, a series of mindmaps was made to get a better overview of visual product properties. They can be found in Appendix B. The properties discovered in these mindmaps are listed in the following paragraphs.

Refractivity Refractivity is the degree to which a material changes the propagation direction of light travelling through it.[1] Since this is caused by the phenomenon of refraction, a material's refractivity is directly represented by its refractive index.

Fig 1: Two examples of refraction.

Translucency, Transparency and Opacity Transparency is the degree to which a material allows light to pass through it.[2] For most materials, this depends on the frequency of the light passing through. A material could be transparent to ultraviolet light, for example, but not allow infrared light to pass. Such a material could be called opaque to infrared light, and the degree to which it does this determines its opacity. The transparency of an object can be expressed as a fraction, expressing which part of the light hitting the object is allowed to pass through. The light passing through a transparent object can be scattered internally by sub-surface scattering. This means that an image passing through the object is distorted. When a material scatters light internally in this way, it is called translucent.[3]

Fig. 2: A helmet visor (transparent), a measuring cup (translucent), and a lamp (translucent). The helmet body is opaque.

Reflectivity Reflectivity is the degree to which a material reflects light.[4] A material can exhibit both specular reflection, when its surface is smooth, and diffuse reflection, when its surface is rough enough to scatter light. Often diffuse and specular reflection occur on the same surface.


Fig 3: A key reflected specularly (l) and a flashlight body reflecting yellow light diffusely (r)
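To make the refraction and reflection behaviour described above a little more concrete, the standard relations below can be used (general optics, not taken from this thesis or its appendix). Snell's law relates the refractive indices of two media to the change in propagation direction, and the Fresnel expression gives the fraction of light reflected specularly at normal incidence:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2 \qquad\qquad R_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2$$

For example, for a typical clear printing polymer with $n \approx 1.5$ in air ($n \approx 1.0$), $R_0 \approx 0.04$: roughly 4% of the incident light is reflected at each smooth surface, and the rest is transmitted, scattered or absorbed.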

Colour Colour is the property possessed by an object of producing different sensations on the eye as a result of the way it reflects, transmits and/or emits specific spectra of light. [5]

Fig. 4: Various examples of the use of colour to create surface graphics. While the decoration on the helmet is purely esthetic, the barcode on the milk carton serves to convey information.

Graphics Graphics are 2D imagery present on the surface of an object.[6] Graphics can take many forms. They can be monocolour or utilize the entire visual spectrum. Graphics are often painted or stickered onto products, but can also come from changes in the surface material or texture. They require incident light to be viewed and can have different properties than the surface material itself.

Surface texture Surface texture is the pattern that we can experience both visually and haptically on the surface of an object. [7] The object's surface structure influences the way light reflects off it, and this creates a visual pattern that we can observe, whether it be a regular pattern or an irregular one. When such a pattern cannot be discerned by the human eye, the surface is said to be 'smooth'.

Fig. 5: Two examples of textures. As printed in this report they are graphical textures; if they had depth, they would also be physical textures.

2.2 Conclusions Light is a complicated natural phenomenon that has many different interactions with the particles around it. Coming up with a consistent and all-encompassing system to describe the visual properties of products is hard for this reason, but also because visual perception is something that happens in the mind of the observer. However, because physics lets us quantify and explain these effects, we are able to describe visual properties as a function of material properties. This chapter has shown that visual properties define key properties of products that serve not only an esthetic purpose, but a functional one as well. This means that if we can influence visual properties using additive processes, it is possible to come up with new functionalities that are unavailable in other means of manufacturing.


3. Visual Features in Design Now that I’ve gotten an understanding of what visual properties there are and what makes them tick, I set out to find out how designers impart these product features.

3.1 How designers consider visual properties Questionnaire I conducted a small questionnaire among design students at my own faculty. In the questionnaire, respondents were asked to consider several products and indicate whether they found certain visual properties important. It also posed some general statements about the design of visual aspects. The questionnaire and its results can be found in appendices C & D, respectively.

Results In the first part of the questionnaire, respondents were asked to mention a recent design project they did. None of the participants mentioned any visual aspect explicitly in the first question, though some mentioned related terms. This could mean that they give low priority to visual aspects, or that they always consider them and don't think they need to be mentioned. When asked which properties are important in a given set of products, colour was often deemed important, and in all products. Surface roughness and graphics were deemed somewhat important. Transparency and refraction were deemed important in two out of five products, but weren't considered important in the other products. Subsurface scattering was not given much importance across the board. Of the six properties, participants rated colour, surface roughness and graphics as most important in designing products in general. Participants were divided, however, when asked whether the visual properties of a product are a result of the production process and materials chosen, but tended to agree with the statements "I exclude materials from a design based on their visual properties" and "I exclude manufacturing methods for a design based on their effect on visual properties". The majority of participants consider themselves aware of the visual properties they assign to their designs, and feel they spend enough time on visual properties. They also tended to feel they had strong justification when assigning specific visual properties to a design.


3.2 Product Study To get a better idea of why and how visual properties are used, I took various product images and listed design rationales for the properties I could reasonably guess. Some conclusions I drew from this were:

• A specularly reflective surface accentuates shape greatly and is often used for this effect.

• Smoother surfaces are easy to clean, and are often specularly reflective. We thus associate 'cleanliness' with smooth and reflective.

• Colour is often used to emphasize certain parts and draw attention. This helps other use cues be recognized.

• Graphics are mainly used for branding and use cues.

• Transparency and translucency are often functional.

3.3 Model of the design process While thinking about design rationale for additively manufactured products, I came up with a model (Fig. 6) describing some of the problems designers are facing. It describes how a designer has to translate his intentions into a model which can be produced by additive manufacturing. The difference between the designer's intentions and the physically printed product is the effectiveness of the designer's manufacturing system. This effectiveness can be improved in three ways:

• Improvement of the way a digital model can represent design intentions. In the scope of visual aspects, this covers how well the digital model represents real visual properties and how easily the designer can input them into the CAD system. Since both digital rendering and CAD systems have come quite far, I estimate this direction is the hardest to improve on.

• Improvement of the way additive manufacturing technologies can accurately recreate the ideal shape that is described by the CAD model. If we assume the CAD model to perfectly capture the designer's intentions, the quality with which AM technologies can recreate this model directly determines the effectiveness of the design system.

• Lastly, designers can adjust their design intentions to match the deficiencies of the other translation steps. This is the way most modern manufacturing systems are made more effective. In other manufacturing processes, such as injection moulding and routing, designers consider the limitations of the technology as a starting point. This adaptation is the easiest way to improve the effectiveness of the system, but it requires that designers sacrifice a large portion of possible design intents (and thus design freedom) to ensure effectiveness.

Fig. 6: The Additive Manufacturing Design Process Model

3.4 Conclusions Design students seem to think they consciously design visual properties into their products, but do not agree on which properties are important in which products. Looking at products designed by professionals, it is clear that every visual property is meticulously the result of a conscious design rationale. Imparting a specific visual property into a product requires an excellent grasp of the manufacturing process, from design intent to physical product. It is apparent from the above that this happens consistently in many consumer products. However, this also means that if new visual product features are to be developed, it is perhaps more important to give designers the tools and knowledge to design these features.


4. Additive Manufacturing Now that I've gotten an idea of what visual properties there are and how designers impart them, it is necessary to get an understanding of what additive manufacturing is and what it enables. Furthermore, it is important to explore which different additive manufacturing technologies there are and how they relate to each other, especially in their capability of influencing the visual properties of products.

4.1 Introduction Additive manufacturing (AM) is an umbrella term for techniques that manufacture parts by sequentially adding small amounts of material. It has several advantages over traditional manufacturing and is used for increasingly large series. While many sources consider that the first AM technologies arose in the 1980s, patented methods to fabricate shapes using layers can be traced back as early as the 1890s [8]. In recent decades, advancements in process control systems and computer aided design (CAD) systems have allowed more advanced AM technologies to become a competitive alternative to traditional methods of manufacturing.

4.2 General Principles This paragraph outlines the main principles behind additive manufacturing and compares it to other methods of manufacturing. Additive manufacturing differs from many other manufacturing methods in that it does not produce objects by gradually taking away material (subtractive manufacturing) or deforming material (deformative manufacturing), but adds quantities of a starting material and builds up a shape. This leads to two major differences: 1. In additive manufacturing, the process stays largely the same for products with different geometries. 2. In traditional manufacturing, the different process steps lead to constraints for other process steps, limiting form freedom and the integration of shapes and features.


In order for systems to additively manufacture objects, they need to be able to keep adding material without occluding new additions or destroying previously supplied material. This leads all major current AM techniques to use a layered approach to build up objects. Because of the differences described above, AM technologies are very well suited to being controlled by a CAD/CAM design system. This is generally done by slicing a three-dimensional digital model into layers which can then be processed by an AM machine.
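As an illustration of this slicing step, the sketch below (an illustrative simplification for this report, not an actual slicer implementation) cuts a triangle mesh with horizontal planes and collects the contour segments for each layer:

```python
# Minimal mesh-slicing sketch: intersect every triangle with the plane z = const
# for each layer and keep the resulting line segments as that layer's contours.

def slice_mesh(triangles, layer_height):
    """triangles: list of ((x,y,z), (x,y,z), (x,y,z)); returns {layer_index: [segments]}."""
    zs = [v[2] for tri in triangles for v in tri]
    layers, z, index = {}, min(zs), 0
    while z <= max(zs):
        segments = []
        for tri in triangles:
            points = []
            for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
                if (a[2] - z) * (b[2] - z) < 0:            # this edge crosses the slicing plane
                    t = (z - a[2]) / (b[2] - a[2])
                    points.append((a[0] + t * (b[0] - a[0]),
                                   a[1] + t * (b[1] - a[1])))
            if len(points) == 2:                            # triangle contributes one contour segment
                segments.append((points[0], points[1]))
        layers[index] = segments
        z, index = z + layer_height, index + 1
    return layers
```

A real slicer would additionally chain these segments into closed polygons and generate infill and support paths; the sketch only shows the geometric core of the layered approach.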

Compared to traditional means of manufacturing, AM has several advantages and disadvantages [9][10][11]:

Advantages

• The fabrication of structures-in-structures, enabled by the removal of many constraints found in deformative and subtractive manufacturing

• Very high form freedom

• The manufacturing of entire assemblies

• Modification of structural properties at and above the micro level

• No minimum series constraints, enabling customization and reparametrization of every print

• Very few steps between the conceptualization of a CAD model and the manufacturing of it

• Very economical for low series

Disadvantages

• The manufacturing of large parts is very slow or impossible due to build chamber constraints

• Most methods result in low structural strength

• Strict limitations on which materials can be used

• Parts produced have a poor surface finish

• Series of more than a few products are very expensive

4.3 Applications Initially, AM technologies such as stereolithography were mainly used to create prototypes. Even though their build speeds were slow compared to today and machines were much more expensive, they could still offer companies a competitive advantage by being faster than other methods of making prototypes, such as mould-making. Now that AM is maturing, it is still used to make functional prototypes, but it is starting to be used more and more for direct manufacturing. Furthermore, it can be used to create tools for other methods of manufacturing, such as the creation of a model for investment casting. [12]

4.4 Types of AM technologies Over the past decades a large number of AM technologies has emerged. Most techniques are known by different names, which creates difficulty in categorizing them. The terminology subcommittee of the American Society for Testing and Materials (ASTM) has defined seven standardized terms to describe the currently available technologies. [13] Following is a description of each technology, together with its strengths and weaknesses.

Vat Photopolymerization This technique was invented by 3D Systems in the eighties and is most often called Stereolithography (SL or SLA). A basin of liquid UV-sensitive monomers is exposed to a UV laser which triggers a polymerization reaction, forming solid acrylic-like shapes. By stepwise lowering the build platform in the basin and covering it with new monomer, an object can be built up of layers of photopolymer. The technique can create high resolution shapes of reasonable strength, but is not suited for multi-material printing. Furthermore, the usable materials are restricted to UV curable polymers.

Material Jetting This technique uses printheads similar to those in inkjet printers to deposit UV-sensitive monomer liquids onto a build platform, after which they are cured by exposure to UV light. After curing, lowering of the build platform allows the deposition of new layers. Objects built have properties similar to those of Vat Photopolymerization, but can be composed of multiple different materials, as long as the print heads are able to print these simultaneously. When combined with regular inkjet printing, objects can be printed in full colour. [14]

Fig. 7: Applications of additive manufacturing. (Gebhart, 2013)

Binder Jetting In this technique, commercialized by the American company ZCorp, an inkjet setup deposits a binder liquid onto a prepared bed of gypsum powder. The powder bed is then lowered and covered by a new layer of powder, and the process repeats to build up an object. The advantages of this technique are the possibility to incorporate colorants in the binder and the building of objects without requiring support structures. However, multi-material objects are not possible and the resulting material has poor structural properties and surface finish.


Material Extrusion Pioneered by Stratasys, this technique deposits paths of molten thermoplastic onto a build platform by heating a printhead nozzle and extruding thermoplastic filament through the nozzle. The paths harden and allow additional layers to be deposited on top of previous layers, resulting in a built object. This technique is very cheap, but has quite a few drawbacks. It produces low surface quality objects, mainly due to its relatively thick layers. Objects built are highly anisotropic, depending on the way paths are generated. However, printed parts can be functional and have decent structural strength. For many shapes support structures are necessary, which at the moment have to be snapped off after printing. The technology can switch printing heads to print multiple materials and multiple colours, but is currently restricted to thermoplastics.
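To make the notion of deposited paths concrete, the sketch below (an illustrative simplification, not taken from this thesis; it assumes generic RepRap-style G-code) generates the extrusion moves for one square perimeter of a single layer. Each G1 command moves the nozzle to a new X/Y position while feeding E millimetres of filament:

```python
# Emit toolpath commands for one square perimeter: G0 travels without extruding,
# G1 extrudes while moving, E is the cumulative amount of filament fed.

def square_perimeter_gcode(size_mm, z_mm, extrusion_per_mm=0.05, feedrate=1800):
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = [f"G1 Z{z_mm:.2f} F{feedrate}"]                   # move up to the layer height
    x0, y0 = corners[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")                   # travel move to the start corner
    e = 0.0
    for x, y in corners[1:]:
        e += (abs(x - x0) + abs(y - y0)) * extrusion_per_mm   # filament needed for this segment
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f} F{feedrate}")
        x0, y0 = x, y
    return "\n".join(lines)

print(square_perimeter_gcode(size_mm=20, z_mm=0.2))
```

The anisotropy mentioned above follows directly from this path-wise buildup: bonding between neighbouring paths and between layers is weaker than the material within a single extruded path.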

Powder Bed Fusion In powder bed fusion, a high intensity laser fuses powder particles together to form the shape being built. Stepwise lowering of the powder basin and covering it with new layers of powder allows a layered object to be built. The powder particles can be a wide array of different materials, ranging from polymers to metals and even sand. The type of energy source required depends on the material used; for some metals an electron beam is used instead of a laser. Powder bed fusion produces objects of high material strength but has low resolution compared to photopolymerization techniques. The technique is also not suited for multi-material printing.

Sheet Lamination This technique stacks layers of paper or plastic foil and cuts shapes from each layer using a blade or a laser. The resulting objects have inferior tensile strength compared to other techniques and the build resolution is fairly low. Multicolour prints are only possible by varying the colour of each layer, restricting colouring to one dimension. Multi-material printing is not possible and the method tends to generate a lot of waste.


Directed Energy Deposition In directed energy deposition material is deposited by feeding powder from a nozzle and then heating it using a laser and thus melting it to form an object. It can be made to print a wide range of metals but is limited to single material printing. While the metals printed can have high strength, the build resolution is very limited.

4.5 Capability overview Using information gathered from data sheets for the aforementioned technologies, the following tables compare the various technologies. The first table lists the different technical capabilities (Fig. 8). The information from this table, together with the research on visual properties in the previous chapters, was then used to generate the second table (Fig. 9), which lists the ability of each technology to influence the visual properties of products.

Technique | Names | Brands | Resolution (DPI) | Layer thickness (mm) | Tensile strength (MPa) | Multi-material | Multi-colour | Supports needed | Assemblies | Materials
Vat Photopolymerization [15] | Stereolithography Apparatus (SL, SLA) | 3D Systems | 750x750 | 0.016 | 20-40 | Possible | No | Yes | No | UV curable acrylic plastics
Material Jetting [16] | Polyjet, Polyjet Matrix | Objet | 600x600 | 0.016 | 30-65 | Yes | Yes | Yes | Yes | UV hardening polymers
Binder Jetting [17] | — | Z-Corp | 600x540 | 0.09-0.10 | 14-26 | No | Yes | No | Yes | Proprietary materials
Material Extrusion [18] | Fused Deposition Modeling (FDM), Fused Filament Fabrication (FFF) | Stratasys, RepRap | N/A | 0.178 | 36-70 | Yes | Yes | Yes | Yes | ABS, PC, ULTEM, PPSF, PLA
Powder Bed Fusion [19] | SLM, SLS | EOS | 360x360 | 0.02-0.10 | 50-90 | No | No | No | Yes | Various polymers and metals
Sheet Lamination [20] | LOM | Solido, MCOR a.o. | 250x250 | 0.1 | N/A | No | Yes | No | N/A | Paper, plastic
Directed Energy Deposition [21] | LENS | Optomec | 2500x2500 | 0.0001 | N/A | Yes | N/A | N/A | N/A | Specialized inks

Fig. 8: Table comparing seven major AM technologies. References for the values are listed in the leftmost column.

Fig. 9: Table showing the capability of various AM technologies to influence visual properties realistically. Green signifies possibility of influencing, red signifies no influence.


4.6 Trends The field of additive manufacturing has been growing rapidly over the past few years, despite two major financial crises. Following are some trends surrounding it:

Growth in services revenue The growing sale of AM machines and the willingness of people to use them has spawned an exponentially growing industry that provides AM services by printing products on demand. On the one hand this is fueled by businesses identifying additive manufacturing as a reliable method to produce prototypes and small series fast; on the other hand, consumers are increasingly sending their creations to be manufactured at companies like Shapeways and Materialise. Industry analyst Terry Wohlers predicts AM-related services will be a 3 billion dollar industry by 2016. [22]

Growth in the number of personal 3D printers The expiration of Stratasys' FDM patent has made it possible for the RepRap project, a project attempting to build a self-replicating machine, to grow far beyond its humble beginnings. The specifications for the RepRap are open source. Consequently, after only a few years, hundreds of variants of the popular hobbyist FDM printers exist and their number is growing very fast. [23] This is even recognized by 3D Systems, which countered by launching its Cubify desktop printers in 2012.

Growth in the number of patents An important sign that AM is a rapidly maturing industry is that the number of AM-related patents is also growing exponentially. Companies are fighting for a share of the growing market and want to protect their innovations. Not so long ago 3D Systems filed a lawsuit against Formlabs, a startup trying to make an economical printer working on the vat photopolymerization principle. With AM industry giants constantly taking over smaller companies, a patent battle seems very possible. [24]


Overview Various trends in Additive manufacturing, as well as forecasts by experts in the industry have been plotted in Figure 10.

4.7 Conclusions

• Additive manufacturing is a rapidly developing family of technologies that allow for unprecedented form freedom combined with almost no lead time to manufacture.

• AM is best suited to directly manufacture unique products or small series, or to be used in combination with other manufacturing methods.

• The AM industries are rapidly growing and there is interest from hobbyists in printing their own designs, both through services and on desktop printers.

• AM is most limited in the materials that can be processed and the detail of the printed parts.

• Material Jetting (Objet), Binder Jetting (ZCorp) and Filament Extrusion (Stratasys, RepRap) seem most promising for the generation of new visual product features.

Fig. 10: Diagram showing the growth of the AM industry. [25,26,27]


4.8 Expert opinions & Events During the analysis phase, discussions with various experts in the field of AM allowed me to be inspired, gain insights and get feedback on my ideas. During these discussions, the main question I tried to answer was in which directions the different technologies were heading and how they could be used to influence visual properties. More detailed questions concerned specific problems I was dealing with at that time.

Dr. Ir. René Houben - TNO TNO is a large research institute in the Netherlands that focuses on applying scientific knowledge in practice. Their section in Eindhoven has a department that researches new additive manufacturing technologies. I was given a tour of the facility by René Houben, who has worked extensively on printing UV-polymers and high viscosity liquids. While I could not see some restricted technologies, the one that stood out the most was a set-up that uses multiple sequential stationary print heads and moves the build objects to achieve extremely high print speeds, capable of printing 100 unique small products in 6 minutes. Furthermore, Dr. Houben was interested in printing graded-index-of-refraction materials to enable refraction of light in arbitrary build shapes.

Marco Visser & Joris Biskop - LuXeXcel LuXeXcel is a young Dutch company founded by people with experience in the lighting industry. Currently they are able to print multicolour high quality lenses, prisms and reliefs by modifying existing inkjet systems. They use a patented method in which they harden the UV-monomers in a specific way to ensure optically smooth surfaces. They are still working on increasing the height of their prints, which at the time of my visit was limited to a few millimetres. Their focus at the time was on lens systems for the lighting industry, and on graphical applications.

Ir. Bram de Zwart - Freedom of Creation Freedom of Creation is a Dutch company that pioneered the use of AM techniques to manufacture customized design products. I visited their office in Amsterdam and spoke with Product Manager Bram de Zwart about their activities. They were recently acquired by US giant 3D Systems and are now actively selling the Cubify 3D printers. They've also launched a new brand, 'Freshfiber', that sells customizable 3D printed products. Unfortunately they could not help me with any information on new materials or new techniques, but they gave me good insight into how FDM and sintering techniques are used in the production of consumer products.

Fig. 11: Building housing the TNO research department on Additive Manufacturing

Fig. 12: LuXeXcel Headquarters (left), Freedom of Creation's blackboard (right)

Ultimaker Evening Unfortunately I could not arrange a visit to the Ultimaker company, but I was invited to join one of the 'Ulti-evenings'. These are monthly events where Ultimaker employees, Ultimaker owners and enthusiasts meet, discuss new developments and show what they have created. I got a good sense of the tight community of hobbyist tinkerers surrounding the Ultimaker project. I also got feedback on some of my ideas regarding multi-colour printing and heard about the latest developments in the community.

Ir. Betty Oostenbrink - Avans Hogeschool I met Ir. Oostenbrink at the Ultimaker evening. As a lecturer in organic and polymer chemistry she mentored students in extruding their own filaments. I spoke with her about the possibilities of making custom-made filament and the possibility of incorporating thermochromic pigments.

Prof. dr. ir. Wim Poelman - Visiting professor TU Delft Prof. Poelman gave me feedback on my product vision and suggested many technological developments that could be seen as working towards it, such as colour-changing gels, printable OLEDs, laser-sensitive pigments and light-emitting polymers.

Fig. 13: 3D printing event (l), Ultimaker evening (m), Dr. Adrian Bowyer (r)

3D printing event This event was held during the Dutch Design Week in Eindhoven. It was an inspiring day where I got a feel for the variety of companies and people working in additive manufacturing. I also got a better sense of how the technologies are applied in a business context, and I made a few contacts at this event that were of help later in the process.

Dr. Adrian Bowyer - RepRap Dr. Bowyer is the founder of the RepRap project. In a Skype call we spoke about the project and about innovations in filament extrusion technology. His team is currently working on multi-material and multi-colour printing. When I asked him about printing optical features, he said the biggest issue was getting a smooth surface on form features and that there were currently no ways to achieve this. The biggest thing I took away from the call was to think of concepts in nature as inspiration for technological innovations.

Ing. Corinne van Noordenne - API API is an institute for polymer innovations. Via Ing. van Noordenne I was able to get some information on thermochromic pigments.

Ir. Rolf Koster I talked with Ir. Koster about the possibilities and difficulties of creating thermochromic polymers for use in FDM extrusion.

Dr. Ir. Sander van Zuilen - Delft Aerodynamics Lab & Ir. Ricardo Pereira - Delft Aerospace Contact with Dr. van Zuilen at the Aerodynamics Lab and Ir. Pereira at Aerospace Engineering allowed me to get feedback on the use of fluid flow models for the generation of lightguide paths. They also gave me insight into how to use finite element modelling to solve flow problems numerically.

Conclusions During these discussions the overall momentum of additive manufacturing was very clear. I've gotten a clear sense of the capabilities of the different technologies, as well as the variety of ways in which they are used. I also learned that the technologies cannot influence most properties by printing alone, something that is echoed by the literature research. For properties like colour and refractive index, a change in materials is necessary. Consequently, technologies that can print a large variety of materials seem best suited to create new visual product features.


Incubation

I've learned a great deal about additive manufacturing and the design of visual product properties, but it is not yet clear which path to take for the rest of the project. In the coming chapters I experiment in several directions and start to investigate displays as a product feature.


5. Design Directions Based on the findings in the previous section I formulated three directions to investigate as design directions for the rest of the project. They are outlined here.

5.1 PrintOptics The freedom in creating complex geometries that AM technologies offer, combined with the ability to manufacture multiple materials with differing optical properties, provides an opportunity for the creation of products that are wholly printed and contain new optical features. At a company visit to LuXeXcel, a Dutch company working on printing multi-colour 3D objects, I got a glimpse of what's to come. They have worked on improving the smoothness of transparent printed object surfaces to create lenses (Fig. 14). As of yet, they are limited in printing height, printing on big flat sheets. During my visit, I was assured they are working on this and that they expect to create more innovative features when the height is increased.

Fig. 15: Transparant object with internal coloured spiral printed by TNO Eindhoven

Fig. 14: Printed lenses by LuXeXcel

A presentation by René Houben at TNO, a leading Dutch research institute, showed me that they as well are working on improving AM technology to create new features in transparent objects (Fig. 15). A recent paper by Disney Research [28] solidifies the idea that AM can create new optical product features. Some of the features they demonstrate are:

• Printed 'light pipes', short tubes that function like optical fibers, conducting light signals. They show that these can be used to transfer images and signals between surfaces, enabling touch-screen-like sensing on the surface of objects.

• Air pockets inside volumes that can be illuminated. As possible applications they show in-surface displays and volumetric displays.

• Internally printed geometries that, when deformed, cause light to follow a different path and can therefore act as integrated buttons and accelerometers.

From this I conclude that AM technologies will indeed result in new optical product features that are hard to foresee now. Additionally, as my questionnaire showed that design students tend to consider other visual features more important, I feel that this is an important direction to pursue in order to make a case for these features and provide a proof-of-concept.

Fig. 17: Schematic of light pipes printed by Disney Research (l), and their vision of building complete products with them (r), taken from [28]
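The light pipes described above work through total internal reflection: light travelling inside the clear core stays trapped as long as it hits the wall at a sufficiently shallow angle. As a brief aside (standard fibre-optics relations, not taken from the Disney paper), the critical angle beyond which light is fully reflected follows from Snell's law:

$$\theta_c = \arcsin\!\left(\frac{n_{\text{cladding}}}{n_{\text{core}}}\right), \qquad n_{\text{core}} > n_{\text{cladding}}$$

For a printed core with $n \approx 1.5$ surrounded by air or a lower-index support material with $n \approx 1.0$, $\theta_c \approx 42°$ measured from the wall normal, so a large range of ray angles is guided along the tube.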

5.2 Multi-colour Fused Deposition Modeling After Stratasys' patents on fused deposition modeling expired, the founding of the RepRap project spawned a large family of cheap and accessible material extrusion printers. In a survey by Moilanen & Vadén (Fig. 17), an overwhelming share of respondents had used a printer from the RepRap family [29]. A whole community of so-called Makers has sprung up around these accessible printers: people who tinker with their printers, experiment and, as a collective, drive the technology further. The Dutch company Ultimaker makes such printers, and I've experienced the openness of their community first-hand during a so-called Ulti-evening. Looking at the results of the previous chapter, it is perhaps not surprising that a lot of work is being done to attempt continuous multi-colour printing on FDM machines (Figures 19-21). There seems to be a strong demand for multi-colour printing among users, as can be read on the many forums they frequent. While the Stratasys machines and their cheap counterparts have no trouble printing different discrete colours from a palette of filaments, what users seem to want is printing continuous ranges of colour. The current interest in multi-colour printing, together with the results from the questionnaire and the product study, leads me to conclude that multi-colour FDM is a direction worth pursuing in this project. I'm interested in improving the technique itself, but also in looking at how such a technique could be applied in designing.

Fig. 17: Pie chart showing printer use from a survey by Moilanen & Vadén

Fig. 18: Example of a product not possible to print with FDM

Fig. 19: Samples coloured with mixed filaments

Fig. 20: Multi-coloured prints by a member of the RepRap community

5.3 Textured Objects One of the strengths of additive manufacturing also seems to be a weakness. The layer-based buildup of material results in a coarser surface texture compared to other techniques, such as injection moulding. There is a perceived link between surface finish and product quality, which currently makes most printed products undesirable for mass consumption. During discussions with maker enthusiasts and experts in the field, I gathered that surface quality is a big deal, and much work is being done to improve it. Meanwhile, I see an opportunity for additive manufacturing to create new surface textures on mass-produced products. Additionally, surface texture is connected not only to visual properties such as reflectivity, but also to the tactile experience of users. While the topic of this assignment is new visual features, I think this direction is too interesting to ignore. I would like to experiment with surface textures unfeasible using other methods of manufacturing and I'm interested to see what functional applications there could be.

Fig. 20: Continuous colour mixing by Pia Taubert at the RepRap Lab in Bath

Fig. 21: Filament coloured using stained filaments by a member of the RepRap community

Fig. 22: Examples of textured objects.


6. Early Experimentation After setting three search fields for the start of the conceptualization process, it was time to get hands-on experience with these different directions and find out whether they inspired me and offered a basis for the remainder of the project.

6.1 Printed Optics The first step in investigating this direction was attempting to replicate the results from a paper by Disney Research [28] that pioneered the fabrication of waveguides in an Objet machine. Two blocks of VeroClear material containing hollow tubes of Support material were created, in different build orientations. Both blocks contained five straight tubes and five curved tubes that spanned a 90 degree angle. The tubes were varied in diameter. While it was clear that some light was internally reflected, the blocks mainly scattered the light and as such I was unable to replicate the results by Disney. However, successful test prints made by my mentor showed that the fault had probably been sealing the ends of the tubes with 1 mm of VeroClear material to prevent the support material from leaving the block.

Fig. 23: Block of Objet Veroclear and Support materials with waveguides printed internally. Contrast added in the right picture for clarity. Two of the waveguide endings are circled in red.

6.2 Multi Colour FDM By inserting two 1.75 mm filaments that were meant for use in a commercial Cube 3D FDM machine into the Ultimaker extruder, which takes 3 mm filament, I was able to reprint a previously made model to see the effect of the two colours. The result shows that the two filaments were mixed by the Ultimaker's heated nozzle, but that a 'toothpaste effect' was still visible (Fig. 24). One side of the print was slightly more yellow while the other side was slightly more reddish. The effect is hard to capture on photo, but can be seen by the naked eye.

In searching for other methods of achieving multi-colour printing with FDM machines, I investigated the use of thermochromic pigments (Fig. 25) in printer filaments. Such pigments change colour when their temperature changes, either temporarily or permanently. Unfortunately, neither of the institutes that I contacted with my request for more information about thermosetting pigments (which change permanently under the influence of heat) replied to my request. Further investigation into this very complex field showed that the temperature range in which these pigments change is limited [30], around 70 degrees at best. As this is far below the nozzle temperature required for FDM printing, this direction did not seem to offer further possibilities.

Fig. 25: Gel containing thermochromic pigments.

6.3 Textured FDM Using an Ultimaker FDM machine I printed several shapes to experiment with printing textures. From observations earlier in the process I knew that the anisotropic properties of the Ultimaker would somewhat prevent me from creating textures in the Z direction, so I decided to study the effect of this anisotropy and try to use it to my advantage. Using Blender, a 3D modelling package, I modelled three shapes. They were then sliced and printed using the ReplicatorG software. The first object has an hourglass texture that was made to study how the printed curved material reflected light. The object very clearly shows a diamond-like reflection pattern, but individual layers and printing imprecision introduce jagged interruptions in the reflection gradients. The second print was an attempt to create a fish-scale texture to study how sudden changes in height could generate dark and light spots.



The last print was a half sphere that was given a random heightmap texture. This object clearly showed that while the texture shows convincingly at angles close to the Z axis, the texture effect loses definition the more it is printed perpendicular to the Z axis.

Fig. 26: Printed fishscale texture (l), hourglass reflection study (m) and randomly textured dome (r).

6.4 Conclusions From these exploratory experiments I concluded the following:

• Printing lightguides appears to be possible using polymer jetting. This avenue offers inspiring possibilities in the transfer of light signals through products. Furthermore, there could be other possibilities such as the fabrication of lenses, prisms and other optical features, either as products by themselves or as integrated parts of one.

• While it appears to be possible to create textured surfaces on FDM-printed objects in one direction, I have not found any possibilities or avenues that differ from what is already being done.

• Multi-colour printing as I imagine it, with continuous colour ranges throughout products, does not seem to be possible using current FDM technology.

Based on these conclusions, the printed optics direction seems to offer the best chance of developing a new visual product feature during this project. This warrants further investigation into this direction.


7. Display Study In the previous chapter I decided to further pursue the printed optics direction. This direction seemed to offer the potential of creating displays. Therefore, this chapter describes my attempt to find out what defines a display and what is needed to create one.

7.1 Displays as product features We can find explicit information on the surfaces of most mass-produced consumer products. The decision to put that information there is made consciously somewhere during the lifecycle of the product, and it can therefore be regarded as a product feature. I choose to call this feature a display.

When speaking of a display as a product feature, the foremost association would be some type of video display: a Liquid Crystal Display (LCD), LED matrix, E-ink screen, etc. However, in general the verb 'display' is used in a much broader sense: Display: to put (something) in a prominent place in order that it may readily be seen. (Oxford English Dictionary) Consequently, a display as a product feature could be defined as:

An area or volume used to present the user with visual information

When using this definition, two main types of product features can be discerned: 1. Surface displays - these present 2D information on a product surface. 2. Volumetric displays - these present 3D information in a volume. For the rest of this report, all mention of displays will refer to the former.

7.2 Types of surface displays There are many ways in which different types of surface displays could be categorized. Since the primary purpose of a display is to convey visual information, the most obvious categorization covers the type of information that the display can convey. Using this categorization, I have found there to be three different types of displays: static, semi-dynamic and dynamic.

Static displays A static display (Fig. 27) always displays the same explicit information. While the interpretation of a user might vary with each use, or over time, the information on the product itself does not change. This means that the information cannot interactively relate to the current product context. Static displays tend to contain information that is deemed important by the manufacturer of a product at the time of manufacturing. Because static displays need to stay relevant in varying contexts, their content is often kept ambiguous.


Fig. 27: Two examples of static displays. On the left, a logo using laminated graphics. On the right, a use cue created using embossing.

Semi-dynamic displays Static displays do not suffice for products that need to change the information they convey to the user. If one or more elements of a display can change state, the display can no longer be considered static (Fig. 28). The range of information that such a display can cover is limited by the way its changeable elements work. It might be a light switching on or off, some mechanical dial moving, or some other changeable element. The essence is that a mechanism inside the product causes an element on the surface of the product to visibly change, and that this change is discernible by a human user.

When a display is made to show information about a predetermined set of variables, but this set cannot change due to the makeup of the display, the display is semi-dynamic: it cannot provide information about new variables that were conceived during the lifetime of the product. For example, a clock might have separate dials for hours, minutes and seconds, but it will not be able to express the temperature. A switchboard might indicate the status of all machines in an engine room, but when it needs to cover a new machine it needs to be expanded.

Fig. 28: Different examples of semi-dynamic displays.

Dynamic displays When the number of elements in a semi-dynamic display grows, the number of possible combinations of elements grows exponentially; with n independent on/off elements, for instance, 2^n distinct patterns can be shown. Because combinations of different elements can start to carry meaning in the form of symbols and geometries, the display can start to show information about anything that is understandable by the user. This transition means that the display can now be considered a dynamic display (Fig. 29). While the display's characteristics still determine the way in which the information can be presented, the user starts to become the limiting factor in whether this information can be correctly conveyed.

Fig. 29: A large number of changeable 'pixels' causes the emergence of a dynamic display.

7.3 Hierarchy The different types of display are strongly related and show a hierarchy. In general, more dynamic displays can often emulate more static ones. A clock can emulate a picture of a clock when it stands still. A computer screen can emulate a matrix clock, or a static image. Semi-dynamic displays are sometimes made up of different static elements, as in the case of a wall clock, for example. A dynamic display can comprise several semi-dynamic elements.

It must be noted that the borders between the latter categories are fuzzy: it is not exactly clear when a display transitions to being dynamic, because the interpretation of the display is done by the user.
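As a purely hypothetical sketch of this hierarchy (my own illustration, not taken from the study above), a dynamic display modelled as a grid of on/off pixels can emulate a semi-dynamic element, such as a digit indicator, simply by restricting itself to a fixed set of pixel patterns:

```python
# A dynamic display is modelled as a pixel grid; emulating a semi-dynamic digit
# indicator means only ever showing one of a fixed set of predefined patterns.

DIGIT_PATTERNS = {                      # 3x5 patterns for a few example digits
    "0": ["###", "# #", "# #", "# #", "###"],
    "1": ["  #", "  #", "  #", "  #", "  #"],
    "7": ["###", "  #", "  #", "  #", "  #"],
}

def render(digit):
    """Return the lit/unlit state of every pixel for one emulated digit."""
    return [[cell == "#" for cell in row] for row in DIGIT_PATTERNS[digit]]

for row in render("7"):                 # print the emulated semi-dynamic element
    print("".join("#" if lit else "." for lit in row))
```

The same grid could of course show arbitrary patterns as well, which is exactly what makes it dynamic rather than semi-dynamic.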

7.4 Conclusions When looking at displays as product surface features, three categories can be discerned: static, semi-dynamic and dynamic. When a product has a large number of changeable visible elements, it starts to be capable of representing context-specific information and conveying new meanings, symbols and variables that might not have been conceived during the design and production of the product.


8. Displays in Products It has become clearer what a display is and when something becomes a display. But an important question has been left unanswered: how are displays designed into products? This chapter describes a product study aimed at answering that question.

8.1 Product study: Displays The study in this chapter aims to find out how displays as categorized in the previous chapter are represented in modern-day consumer products. The setup of the study was as follows:

Intent Take apart three products. Examine how they are built up, and answer:

• What types of displays and control elements does the product contain?

• What functions does the product have?

• What is the design rationale behind the display and control elements?

Criteria The products studied were selected based upon the following criteria:

• They must contain at least 2 types of displays and at least 2 control elements.

• They are mass-produced consumer products.

• The functionalities of the product are of an interactive nature.

Process Take apart each product and list the display elements and control elements encountered. Reason about the design rationale for the use of those specific elements.

Results Among a collected range of products, three were selected:

• An alarm clock

• A digital clock switch

• An MP3 player

These were disassembled and their display elements and control elements can be seen in Figure 30. Design rationales were estimated and are shown in the same figure. Pictures of the disassembled products can be found in Appendix E.

8.2 Conclusions & Observations From the study I conclude the following:

• Products tend to have many more parts than their outward appearance would suggest. This implies that designers strive for a unified and smooth product appearance, which is a jarring contrast with the actual internal layout of the products. The insides of the studied products are a complicated mess of support structures, circuit boards and electrical and mechanical components.

• The placement of control elements and displays is influenced by several constraints: geometrical, fabrication-related, assembly-related, technological and financial.

• Due to these constraints, a change in the placement of control elements or display elements very easily results in far-reaching consequences for the way the product is constructed.

• The products are not designed with parametric flexibility in mind.

• The internal buildup of the products is designed around an assembly process.


Functional element: Alphanumeric display
Solution found: Segment LED display connected to the circuit board with a ribbon and supported by a plastic support structure. The circuit board can be the main board or a subassembly board. A transparent composite housing part covers the display.
Design rationale: 7-segment LED displays are common and cheap. Connecting to the main board with a ribbon allows for a flexible setup and keeps the wires ordered. The support structure allows the display to sit close to the housing and can probably be changed parametrically to fit a changing housing. The transparent housing part allows for a somewhat curved display shape.

Functional element: Toggle display
Solution found: LED integrated with the display above. Description of the toggle value printed on the transparent housing part.
Design rationale: Integrating with the alphanumeric display is cheaper in mass production than using separate LEDs.

Functional element: Slide display
Solution found: Flexible plastic filament wrapping around a wheel. Turning the wheel shortens the filament, which consequently shifts beneath a transparent piece of plastic to form a slide display.
Design rationale: Uses a simple mechanical principle to convert rotation into translation. Requires no electricity or linear actuators and is therefore probably cheaper.

Functional element: Slider button
Solution found: Plastic part covering an electrical switch component connected to a circuit board. The part slides in part of the housing or in a designated support structure. The circuit board connects to the main board via wires or a ribbon.
Design rationale: Uses standard parts for the switches but allows for aesthetic design of the buttons to match the rest of the product. A subassembly circuit board allows for more flexible placement of the buttons in the product.

Functional element: Rigid press button
Solution found: Composite plastic part molded to the housing. A flexible part allows the button to move; a rigid part forms the button surface. The button connects to a push-button component on a circuit board.
Design rationale: Multi-material injection molding of the buttons with the housing is probably cheaper than the assembly of small button parts.

Functional element: Soft press button
Solution found: Rubber-like part that fits in the housing and connects to push-button components on a circuit board.
Design rationale: The soft part has a different feel than a rigid button. Also, when the button is pressed too hard, the rubber deforms and does not damage the underlying circuitry.

Functional element: Inset reset press button
Solution found: The button is small and lowered in relation to the housing, which makes it impossible to press with fingers.
Design rationale: The button is made hard to reach so it will not be pressed accidentally; it requires the user to find a specific tool to press it.

Functional element: Wheel
Solution found: Plastic part connected to a rotary potentiometer on a circuit board. The wheel is placed at the shell division line.
Design rationale: Potentiometers are common and cheap. The placement allows for assembly of the shell without causing problems.

Functional element: Button labels
Solution found: Text printed on the housing in black or white ink.
Design rationale: Ensures that the right label is at the right button and eliminates the need for printing on multiple parts.

Functional element: Safety labels
Solution found: Either pasted on as a sticker or embossed during injection moulding.
Design rationale: Injection-moulded embossing is viable for larger product series with little or no changes, and only when the embossing does not prevent the mould from releasing the part. Stickers are cheaper for smaller series, as a mould without embossed text is cheaper.

Fig. 30: Table containing the results of the product study found in Appendix E.



Vision

The opportunity of creating displays using printable lightguides brought forth a clear vision for the future of displays in products. In the coming chapter I explain how and why freeform surface interfaces will become a product feature in the near future.


9. Vision In this chapter I will present my vision for a new visual product feature, which I have named Freeform Surface Interfaces (FSI). First the vision will be briefly outlined, after which I will support it with indications pointing towards the emergence of FSI's. I will then give an overview of what FSI's have to offer in terms of affordances. Finally, I estimate when crucial steps toward the realization of the vision will be made.

9.1 Freeform Surface Interfaces In the current assembly-centered manufacturing paradigm, dynamic displays are almost exclusively rectangular flat surfaces that are embedded in a product. While this is useful for many applications where the displayed content is also rectangular, it limits product designs in form freedom and requires products to be designed around the display, rather than vice versa. Based upon the developments and findings discussed in the previous chapters, I envision the emergence of consumer products with freeform dynamic touch interfaces on them in the near future. Such features will act as a conduit to make increasingly complex products more understandable and accessible for users. Additionally, they will help to parameterize products designed for additive manufacturing systems. I define a Freeform Surface Interface product feature to have the following properties:

1. It provides dynamic visual information. An FSI should be seen as a rich tool for presenting information to the user. The complexity of an FSI is higher than that of a single blinking LED or a moving dial, and in that sense it should be seen as a dynamic display as discussed in the previous section. As such, it has a very high number of switchable elements that allow it to depict symbols, text and graphics, on par with the flat displays used in laptops, smartphones and tablets.

2. It can be incorporated on freeform surfaces. An FSI is applicable on a wide range of double-curved and otherwise shaped surfaces that need not be continuous. These surfaces lie on an external product surface that is reachable and perceivable by a user.

3. It is applicable to consumer products. If FSI's are to improve the way users interact with products in comparison with modern-day products, they need to be a reasonable alternative in terms of cost. This means FSI's should be producible by an industrial mass-manufacturing process.

4. It has the capability to sense touch input from the user. In order for an FSI to be applicable across the entire product surface, it needs to span areas that normally contain buttons and other control elements. The envisioned FSI's would therefore be touch sensitive, so that they can replace those control elements.

Fig. 31: Illustration of how I envision rectangular display components embedded in products will evolve into full-surface interface features.

9.2 Indicators of the emergence of FSI's While the product study in the previous section makes a strong point for simplifying products by eliminating analog control features, it is not yet clear why FSI's have a raison d'être from a user-centered and techno-historical perspective. Several trends indicating a fitting milieu for the emergence of these features are discussed below.

The Personal Cloud In computer and entertainment products, more and more digital services and data are accessible on a range of digital devices, from smartphones and tablets to computers and even watches. Content providers are structuring their services to be cross-platform [31]. Users have a set of personal data and services that they want to access when they want and how they want. This pressures product developers to make their products flexible in terms of functionality.

Research into curved displays During the last decade a lot of research has been done on the creation of curved displays [33]. Most approaches have used back-projection combined with some form of image-recognition software to provide information about user manipulations. Researchers in ubiquitous computing recognize the potential of curved touch-sensitive displays in facilitating a 'Direct, Easy, Walk-Up-and-Use Interaction Experience' and 'an ecosystem of heterogeneous display devices, small and large, flat and curved, each serving a particular purpose' [33]. However, they also identify challenges, both in making curved displays usable for multiple users and in finding beneficial applications.

The Internet of Things First coined by Kevin Ashton [32], who saw RFID tags as a way to make objects communicate, this term denotes the coming of a communication network across devices and objects that helps computer systems make smart decisions about the state of those objects. In such a network, products need to communicate their intent and state, but also process data from other devices and react to that data. In such an environment, products that can process and show information in a context-specific way clearly have an edge over those that cannot.

Consolidation of products Due to the miniaturization of integrated circuitry and other components, and the development of new materials, products are increasingly able to sense, process and output information. This leads to more and more products integrating the functionalities of other products (Fig. 32). A prime example is the smartphone, which has transformed from a way to communicate wirelessly into a product able to emulate dozens of other products. In turn, this development fuels the aforementioned trends.

Fig. 32: Examples of products consolidated by the modern smartphone.


9.3 Affordances of FSI's Now that we've seen what is meant by a 'Freeform Surface Interface' and what indicates their emergence, this section expounds the consequences they have for the way users use their products.

Context aware features When products have the possibility to display dynamic information on their surfaces, they can adapt this information to fit the current use context, insofar as the product is able to anticipate it. Information that would be static on current products but is only relevant during some stages of use would then not be displayed during the stages where it is not relevant. Examples are a razor that only tells you to clean its heads when they are full, a vacuum cleaner that tells you when to empty its bag, or a computer mouse that tells you when to change the battery and where to open the lid.

Software based functionality in the physical domain Comparable to software on the touchscreens of smartphones and tablets, a surface display would be able to place control elements anywhere the display covers. Consequently, the form, function and location of a control element could be changed during the product's lifecycle by a change in the digital representation of the element. In other words, firmware on the product determines where the buttons on the product are. This has many implications for the way products can be used, but also for the way products age. Lastly, product creators could create different functionality on the same physical hardware and license the models differently.


User tailoring and customization Because control features can be altered by changes in software, products can be user-tailored and customized on the fly. The same product can work differently for different users, while retaining its core functionality. For example, in an electric toothbrush the button could move to accommodate a child's hand. In a television remote, buttons could become larger and less numerous for a grandma, while the same remote could show extra buttons for replay capabilities when used by a tech-savvy sports fan.

Dynamic product esthetics When the display quality of FSI’s becomes sufficiently high, changes in product esthetics will be made possible. The same product could be made to look different based on branding, context factors or user preferences. Colour, texture and graphical elements could change by adjusting the software that controls the display. Products could become more chameleon-like and blend in with their environments or with other products nearby.

Smart Sensing When products can sense where a user touches them on the surface, they can use this information to deduce the intention the user has in using them. Products could adapt functionality, or start or stop it, based on how they are held by the user. For example, a heavy-duty drill could slow down for precision drilling when it is held with two fingers, but speed up for heavy-duty drilling when grabbed from the back. A kitchen blender could stop blending as soon as the user is not touching it with two hands, to prevent accidents.

Consolidation of Products Ultimately, considering the aforementioned affordances, the biggest change is in the core functionality of the products themselves. When products become more dynamic in what they can sense and show to the user, and this dynamic behavior is software-driven, products can be reprogrammed to have new functionalities, comparable to how a smartphone gains new functionality with a new 'app'. For example, a computer mouse could show two buttons and be used as a mouse, or show more buttons and be used as a remote. Alternatively, it could show a number and be used as a digital measuring device.

Segregation of Systems In the envisioned products, control features are no longer mechanical parts that have to be incorporated in the housing, but are formed by digitally controlled features on the FSI. Consequently, all the electronic parts within the product can be grouped together, as long as they can be connected to the display. This results in products that are easier to disassemble, but also in more form freedom for the product's shell shape.

Conclusion The affordances created by FSI features are far-reaching and mainly revolve around the possibility to change visual graphical elements, digitally defined control elements and touch recognition on the product surface. These affordances offer a strong argument for the development of FSI's from a user-interaction perspective.


9.4 Steps towards the realization of FSI's As with all innovations, the envisioned freeform surface display features will not develop overnight. This section is an attempt to predict which steps the development towards FSI's will take, in terms of the technologies enabling them and the affordances offered by those technological developments. (Fig. 34)

Technological milestones Currently, flat touchscreen technology has matured following the advent of LCD and LED displays, coupled with various techniques to make them touch-responsive. In recent years Organic LED (OLED) screens have begun to appear in consumer products, and research is being done towards making them bendable and stretchable. [34]

However, these screens are developed as separate parts and therefore fit within an assembly production paradigm. It is possible that stretchable OLED screens incorporated on the surfaces of freeform product shapes will serve as functional freeform dynamic displays. I predict such assemblies will require a high degree of preparation and planning in the manufacturing process, because these screens will need to be built for stretching across specific shapes.

Others have foreseen a rapid assembly paradigm as a transitional phase between the assembly paradigm and the additive manufacturing paradigm. [35] Products with FSI's could then be formed by a combination of additively manufactured product shapes, smart generation of conductor paths and a pick-and-place process that incorporates separate LED elements. [36]

In light of the coming additive manufacturing paradigm, it is imaginable that these technologies offer simpler methods to manufacture freeform surface displays. The aforementioned recent research paper by Disney Research shows this convincingly by producing printed lightguide systems that adequately serve as waveguides. [28] Additionally, high-resolution technologies like Optomec's LENS system can print microscale electronics [37]. Such technologies could perhaps, in the future, be expected to print functioning LEDs onto built shapes and thus create the envisioned freeform displays. Experiments have already demonstrated successfully printed OLEDs. [38]

Steps toward envisioned affordances The affordances discussed in the previous paragraph will gradually be made possible as the enabling technologies advance towards the envisioned interface feature. While most affordances will be attainable within the first generations of additive manufacturing technologies, others will require more extensive maturation of the technology. In the table below, expectations for each of the affordances are listed as dependent on three areas of development: resolution, colour range and touch capability. Of these three, resolution seems to be the most critical.

Affordance                            Resolution     Colour Range   Touch Capability
Context aware features                Medium-High    Low            Low
Dynamic product esthetics             High           High           Low
User tailoring of product features    Medium-High    Low            Low
Software based features               Medium-High    Low            Medium-High
Consolidation of products             High           Medium         High
Smart sensing                         Low            Low            Medium-High
System segregation                    Low            Low            Low

Fig. 33: Table containing an overview of the level of development required before the envisioned affordances can be expected to be realized.

Conclusions While flexible OLED technologies from the current assembly paradigm offer possibilities for the creation of FSI's, it seems likely that additive manufacturing technologies will enable cheap and customizable FSI's in the near future. In development efforts, priority should be given to attaining high-resolution displays, as this has the greatest benefit for enabling novel affordances.

Fig. 34: Timeline containing the estimated emergence of steps towards the realization of FSI's, listed in their respective paradigms (assembly, rapid assembly and additive manufacturing): rigid LED & LCD displays, bendable OLEDs, stretchable OLEDs, additively manufactured lightguide displays, embedded surface LED printing and printed light-source surfaces, spanning roughly 2010 to beyond 2020.



Conceptualization

Now that I’ve formulated the vision, I develop three concepts of products incorporated showing how it would fit the design of remotes, dashboards and alarm clocks. One of these concepts will be the basis for the proof of concept at the end of the project.


10. Choice of Concept Directions The previous chapter envisioned how freeform surface interfaces could emerge in consumer products. In this chapter I choose three consumer products to which the vision will be applied, to demonstrate how those products would be influenced by redesigning them with freeform surface interfaces.

10.1 Criteria for finding CONCEPT DIRECTIONS The criteria should select for products that are able to demonstrate the added benefits of freeform surface displays.

1) Is this product's form determined largely by its display components? My expectation is that FSI's give designers more form freedom. In a product where form is closely tied to function, this freedom might result in an increase in functionality.

2) Does the product have many interface elements? FSI's promise rich, context-driven interfaces, which is best demonstrated with products whose current interface is weak or overloaded.

3) Is this a personal product? To demonstrate the customization of product form and the user-tailoring of interactive features that FSI's offer, it makes sense to choose a personal product that users will want to tweak to their tastes.

10.2 Product Brainstorm About 40 products were generated and tested against the aforementioned criteria on the basis of intuition.

Fig. 36: The mindmap generated to choose three products from. The encircled numbers correspond to products meeting the criteria described in 10.1.

10.3 Chosen CONCEPT directions From the directions that scored high on all criteria I chose the three that most interested me. The chosen directions are in vastly different product categories, to consider the potential of lightguides in a broad scope.

Alarm clock Alarm clocks are personal products that we hate every morning. They often have multiple functions for waking you: radio functions are commonly built in, but some clocks also have lamps and calendars. As a result, these clocks require a large number of interactive elements to set up and use.

Remotes Remotes are small, optimized products used for controlling a large host of devices. While most remotes are currently flat, the use of freeform interfaces might present new possibilities for designing remotes to be more intuitive and flexible.

Dashboards Dashboards are a very style-sensitive element in many cars. They overload us with a huge amount of information that is not always relevant. The use of FSI's might allow for the integration of these elements in steering wheels or other curved dashboard features.

Fig. 35: Examples of the three chosen product directions. An alarm clock (l), a television remote (m), and a dashboard section (r).


11. Analysis: Remotes In order to design remotes that implement the vision, I need to gain an understanding of currently designed remotes by answering the question: what are remotes, and what problems should I expect to encounter in designing them?

11.1 Product description A remote control is a small handheld device used to wirelessly control consumer appliances from a short distance. Remotes are typically used to control functions on televisions, audio sets, media players and home cinema sets, but are also used with game consoles, security systems, air conditioning, etc.

Remotes were born out of a desire for comfort. They allow users to sit back and control functions on a device from a distance, without having to move to the device physically.

Different methods are used to communicate signals with the controllable appliance. Most remotes use an infrared LED to convey blinking patterns to a receiver on the appliance; other remotes use radio-frequency signals. [39] Early remotes used less reliable techniques, such as ultrasonic frequencies and light-beam signalling. The first remote in history was invented by Nikola Tesla in 1898 and used radio waves. [40]

Most devices come shipped with their own remote, which is made to communicate specifically with that device or a family of devices from the same manufacturer. This has spawned a need for so-called universal remotes, which aim to control a wide variety of devices from different manufacturers.

Fig. 37: Various examples of television remotes.

Fig. 38: The fact that every device comes with its own remote is a problem for users (top). A wireless keyboard can be considered a remote (left). An example of a universal remote with an integrated display (right).

11.2 Problem definition Optimization of size, energy usage and functionality Remotes are handheld devices that we expect to function whenever we need them. Because of this, they are designed to last a long time on batteries, so that they do not have to be recharged. As a handheld device, the remote is expected to be compact, yet it is still expected to control all possible functions of its parent device. This leads to the use of many small buttons on many types of remotes. The small button size means that little information can be presented on the button to help the user understand what it does. This is a problem, as the functions controlled by the remote are often complex.

Static controls Due to the optimizations described above, remote buttons often link to static functions on the host device. When the context of use varies, the remote's buttons stay the same, even though they might confuse the user or not be usable at all because their function does not fit the current context. Most remotes only transfer information to their host device and do not receive context information from it that could make their control elements more context-driven. Remotes are generally tied to a specific device and made to work specifically with that device. If the remote is lost or broken, it often becomes hard or impossible for the user to interact with the host device or perform certain functions. In the inverse situation, when the host device is lost or broken, the remote loses its purpose.

11.3 Conclusions From the previous paragraphs we've seen that remotes are small devices, heavy with control elements, that serve to communicate with one or more parent devices. For the development of the remote concept, several things should be kept in mind:
• The use context of a remote is heavily dependent on that of its parent device.
• Control features typical for remotes are push buttons, radio buttons, toggle buttons and sliders.
• Remotes are operated with the hands, and as such the capabilities of the human hand are important to keep in mind.


12. Analysis: Alarm Clocks Now that I've looked at remotes, it's time to do the same for alarm clocks: what makes them tick, and what problems do I need to keep in mind when designing the alarm clock concept?

12.1 Product description An alarm clock is a small device that produces alarming sounds when user-specified clock times are reached. They are typically used to wake users up in the morning at specified times, so they do not oversleep. Such alarm clocks are stationary devices mostly found in the bedroom. Other types of alarm clocks include hand-wound ones such as egg timers, used to time cooking process steps, and stopwatches for timekeeping during sports and other activities. (Note: such types will not be further considered for the concept, and any further mention of an alarm clock will be taken to mean a device used for waking up.)

Originally, waking up occurred mostly without the help of a device or system, because the lack of artificial light sources restricted activity to daytime. The human body possesses its own internal system for waking up at the right time, using a cycle of hormones influenced by ambient light intensity and other factors. [41]

The first modern alarm clocks incorporated hammer-and-bell systems in hand-wound spring clocks, and such clocks can still be found in use today. (Fig. 39) With the coming of microelectronics, such systems were largely replaced by electronic systems using quartz oscillators for timekeeping and cone or piezo speakers to produce sound. These systems slowly incorporated other functions, such as radio, calendar keeping and weather indication. In the past decade, mobile phones are increasingly being used as alarm clocks, having preinstalled software that performs the alarm clock function. In some alarm clocks, light is used to aid in waking up the user by stimulating cortisol production [41]. In alarm clocks for people with a hearing disability, flashing lights and vibrations are used. [42]

Fig. 39: Waking up through the ages: a crowing rooster common on farms (l), a mechanical alarm clock (m), an alarm clock app on a smartphone (r).

12.2 Problem Definition We have a love-hate relationship with our alarm clocks: we absolutely require them to wake us up at times we wouldn't naturally wake up, but when they do, we hate them for it. Well, at least some of us do! In the evening the interaction is careful and precise: the user makes sure to set the right time so as not to oversleep. In the morning, however, the interaction is more coarse: many alarm clocks feature a big snooze button that can be carelessly slammed to give the user a few more minutes of precious sleep.

Modern alarm clocks have additional functionalities, such as lighting, radio, a calendar or even a weather indication. This increase in functionality comes at the price of requiring more control elements and display elements to support these functions. It is clear from looking at the examples that esthetics are important for alarm clocks, but that their shape is constrained by the presence of their displays and control features. Lastly, alarm clocks are very personal products. Waking up is a different ritual for different users. To this end, some alarm clocks allow you to wake up to different tunes, to your own music or to the radio, and to set multiple alarms. Such personalization is, however, limited by what the clock can offer.

Fig. 40: An assortment of alarm clocks. Note how all of them have a flat face to accommodate their display.

12.3 Conclusions To conclude: alarm clocks are devices that we use for waking up in the morning at a time of our choosing. The following should be kept in mind during development of the alarm clock concept:
• Going to bed and waking up are distinctly different contexts.
• The concept should address the plurality of functions that alarm clocks have and offer a dynamic solution for controlling these features.
• Waking up is a personal experience, and alarm clocks should offer users the possibility to personalize them.


13. Analysis: Dashboards In the previous chapters I've taken a look at alarm clocks and remotes. In this chapter I find out what dashboards are and what problems their designers cope with.

13.1 Product Description Dashboards are control panels in cars and other vehicles that serve as a mounting place for the various controls and instruments present in automobiles. The earliest dashboards had just a few of the instruments that modern cars possess today: the Ford Model T had an ammeter and could be fitted with a speedometer as an extra option. By the 1950s, cars had dashboards designed to integrate the instruments into the overall styling, and by the end of the 1980s the first cars had digital displays in them.

Fig. 42: A modern Volkswagen dashboard.

Dashboards in modern cars are intricate assemblies comprising dozens of buttons, dials, air vents and safety measures. The styling of these dashboards is carefully adjusted to that of the rest of the interior and the car itself.

13.2 Problem definition Driving safety is a big issue. Because the control features on dashboards are the user's main method of gleaning information about the car's internal state, the design of the dashboard is much influenced by safety concerns. The most important gauges, such as fuel level, speed, oil level and engine revolutions, are placed directly in front of the driver, often beneath a sunshield. However, the number of product features has grown drastically in recent decades, and this has forced designers to place these features on the centre console or on the steering wheel itself. Studies show that the majority of road accidents is caused by driver inattention [43], and this makes the practice of putting more control elements outside of the main path of vision questionable. Dashboards also have very important esthetic and ergonomic functions, as they are the primary environment that a driver deals with during every hour of driving.

Fig. 41: A classic T-Ford dashboard (top). A 50's dashboard showing a more integrated style (bottom).


Despite this, dashboard features are predominantly static and context-unaware, regardless of the intentions of the driver or of where he/she is driving. Furthermore, the steering wheel, perhaps the most significant control feature in the entire car, routinely blocks the view of the most important instruments when it is being turned.

It is clear that dashboards are complex things to design from a user-interaction perspective. However, the assembly of dashboards is even more complex. Modern dashboards contain hundreds of parts, comprising many subassemblies. All the control features of the car need to be wired to the onboard electronics, and the air vents need to be connected to the climate control system. Meanwhile, these connections have to be hidden from the driver. In some cases this means that changing an air filter requires the entire dashboard to be taken out.

Fig. 43: Two illustrations of the complexity of dashboards. On the left, an exploded view of parts in a classic car. On the right, a car interior with the dashboard removed, exposing the sheer number of electrical wires that need to be connected to the dashboard.

13.3 Conclusion To summarize, dashboards are control panels in automobiles and other vehicles that contain a large number of control and display features. During the development of the dashboard concept, the following should be kept in mind:
• Driving safety is an important consideration. The concept should aim to minimize distraction.
• It is important to look at how different functionalities can be controlled in the same location.


14. Concept Development The past chapters analyzed the product directions for key issues that have to be kept in mind when designing these products. In this chapter the affordances from the vision are applied to each of the product categories. The results are roughly fleshed-out concepts that will be compared to make a choice for a proof-of-concept prototype.

14.1 Idea sketching To generate ideas for the various concepts, quick sketches were made without adhering to a specific methodology, but keeping the analysis of each direction in mind. These sketches can be found in Appendix F. During this process it quickly became apparent that each direction had different aspects that tied well into the vision:
• Remotes offer much to explore in terms of gesture recognition and smart ergonomic control features.
• Dashboards offer a large reduction of visual information through displaying dynamic information and smart sensing.
• Alarm clocks offer new functionalities unavailable in current alarm clocks.

14.2 Hand Models Using hand-moldable wax, hand models were constructed to stimulate thinking about ergonomics and possible concept shapes. Photos of this process can be found in Appendix G.

14.3 Brainstorm Session Two quick ad-hoc brainstorm sessions were held with a small group of students. They were asked to think about the three concept directions and brainstorm about the ways in which Freeform Surface Interfaces could be applied to them. Photos of these sessions can be found in Appendix H.


14.4 Application of affordances The results from the idea sketches were combined with the affordances described in the vision to form rough concepts, expressed in terms of the affordances they offer. These concepts are presented in the following paragraphs.

14.5 REMOTES Context aware features Remotes have many control features that are relevant only during very specific use contexts. For example, the on-off button is the only button worth pressing when the parent device is off. Similarly, buttons for recording, playback and skipping are only useful when something is being recorded or played back.

Dynamic product esthetics Remotes can change colour to fit a specific user’s tastes, but could also adapt to environment colours, time of day, or the state of the parent device. Buttons could change size, shape and colour.

User Tailored Product Features As previously demonstrated in the vision, the buttons on a remote could be strongly tied to the desires of the user. Not all users use every function on a device, and some users could have very specific user requirements. The two examples given here are a very simple and large-size layout for elderly people and an advanced layout for a football fan that has buttons for showing the game from different perspectives or different players.

Software-based features in physical domain As remotes are almost exclusively used as controllers and are operated by hand, freeform surface interfaces offer the most when the shape of the remote fits the hand and when the software makes the most of adapting features to the position of the user's fingers and recognizing gestures. An important possibility is the use of hierarchy between buttons: one finger could control a 'driver' button that determines the functionality of the other fingers, thereby offering a very quick and intuitive interface that requires minimal effort to control.

Consolidation of products Since remotes don't have physical functionality apart from being suited to the human hand, the software functionalities they could possess are highly analogous to those of mobile phones. Consequently, it is conceivable that high-resolution remotes could consolidate many of the same products that modern smartphones have consolidated. This is supported by the many remote-control apps that exist for smartphones. [44]


Smart sensing In remotes, the sensing software could use obscurement of the remote's surface to deduce the way the user is gripping it. Consequently, the way the user obscures the remote could itself be used as a control feature. For example, covering a sphere with both hands could turn the parent device on or off. Additionally, once the remote has established how the user grips the object, changes in the readings can be used to deduce fingers leaving and retouching the surface, or fingers changing position across the surface, offering tap and slide functionality, respectively. Furthermore, specific patterns occur when the user waves a number of fingers across a surface: two fingers swiping past a surface will generate a light-dark-light-dark-light pattern in the pixels being obscured by the shadow of the fingers. This type of information can be used to convey directionality, speed and number in one gesture, and allows for highly intuitive interaction.

14.6 ALARM CLOCKS Context aware features Alarm clocks function in three important contexts: the evening context, the midnight context and the morning context. Current alarm clocks already partly recognize the first and last: the buttons for setting an alarm are often small, while snooze buttons are much larger. In the evening context, the user wants to carefully set the right time to wake up in order not to be late for whatever appointment he/she has. The interaction here is similar to a technician carefully setting a volume slider, and the alarm clock would accommodate this by presenting the user with a slider. In the morning, the user is groggy from sleep and either wants to dismiss the alarm, or slam the clock like a piece of dough to get a few more minutes of precious sleep. Here the clock would offer the user a small button and a big button, respectively. At midnight, a glance at the clock is a measurement of the remaining time, like seeing how much milk is left in a measuring cup; here the clock would offer a progress bar showing how many hours of sleep are left.

Consolidation of Products Current alarm clocks already consolidate lamps and radios, and some have calendar indication functions. Other products I envision being consolidated are weather sensors, calendars and music docking stations.

Dynamic Product Esthetics In alarm clocks the clock itself could change color to fit with the esthetics of the room and change typeface to match with a desired style.

User Tailoring of Product Features While normally showing a digital clock, the alarm clock could accommodate young children who can't yet read clocks with an animation that shows the moon and the sun to indicate day and night. For elderly users that are more used to them, an analog clock could be simulated.

Software Features in Physical Domain Alarm clocks have a relatively large surface for displaying information and often possess a speaker system. Many different types of software could be downloaded to provide new functionalities. As an example, the alarm clock could be used to download and read aloud a bedtime story, presenting the story's characters on its display.


Smart Sensing The alarm clock could sense the number of fingers on its surface to infer user intention. When the user touches the surface with one finger, this could be met with the presentation of a slider to set an alarm or the current time. When the user squeezes with two fingers, this could be used to set light intensity. Furthermore, to ensure that the user properly wakes up, the clock could demand that all ten fingers touch it in a specific way to turn off the alarm, to aid people who have difficulty getting up in the morning.

14.7 DASHBOARDS Context aware features As cars are personal means of transportation, the use context of the car, and consequently the dashboard, changes constantly. As dashboards have many different control features that can all demand user attention, applying context aware features means that the features most relevant to the user can be presented in places where the user can most comfortably glance at or use them without interrupting his/her driving too much. Three contexts imagined are the standstill context, the urban context and the cruise context. In the standstill context the user is presented with a button to start the car, an overview of remaining fuel and a navigation application. While driving through the city the dashboard changes to the urban context, presenting the user with a speed dial and a street map, and could point out parking zones and pedestrian crossings. Once the car has entered the freeway the dashboard changes to the cruise context, in which buttons for cruise control are presented and the vehicle's speed is projected just below the window. The middle console would show exits, fuel level and traffic jams.


Dynamic product esthetics The dashboard could change colour depending on who is driving the car, to make the driving experience more personalized. Other elements such as fonts and button shapes could change as well, but since safety is a large issue this may not always be preferable.

Software Based Features in Physical Domain Many different software features could be added to the car through firmware upgrades. Two examples that spring to mind are installing a custom means of security, such as a swipe-to-unlock function, or apps on the middle console for navigation, weather and other services.

Smart Sensing Sensing where the user places his/her hands is useful for creating a safer car. Many people control their steering wheels with just a few fingers or even a single finger, and such practices are not recommended by driving instructors [45]. The car could alert the user to such behavior. Smart sensing could also be applied to make the driving experience more comfortable: for example, when the user touches the gear shifter the car could automatically engage the clutch. Furthermore, control features could dynamically adjust to fit around the user's hands, always presenting the most relevant buttons within reach.

Consolidation of products Many cars today have already integrated products that were once sold separately, such as radios and navigation systems. As such, dashboards are better viewed as an ecosystem of products rather than one specific product. Freeform surface interfaces do not necessarily offer much opportunity for integrating separate products, but instead offer a chance to integrate the many subsystems that currently exist in dashboards.


14.8 Conclusions and Observations During the development of the aforementioned concepts, I made the following observations:
• Areas that serve only the purpose of sensing or displaying might be better served with conventional methods. Consequently, FSI's work best in objects that aren't always held the same way.
• When the product changes layouts, the user needs to understand the change in context and functionality.
• Designing the functionality of products containing FSI's requires a combination of user interface design and ergonomics (cognitive and physical).
• Form follows function. What form remains when functionality becomes software-based?
• FSI's offer most in products whose shape is determined by some type of functional constraint.

14.9 Choice of concept The concepts were scored relative to one another based on two criteria:
1. How well does the concept demonstrate the affordances described in the vision?
2. Which concept is best suited to be created using lightguide displays?

Fit with affordances
Context aware features: Dashboards clearly have the most functionalities integrated in one environment. I can easily see more contexts being developed for dashboards, while alarm clocks seem to need additional functionalities. Remotes score last because, while contexts on the parent device change all the time, the product itself is often in the same context.
Dynamic product esthetics: For remotes I don't see the relevance, as they serve mostly as a control element and the esthetics mainly concern the parent device and how it fits with the interior; I don't see remotes changing appearance when their parent devices don't. That being said, multi-device remotes would benefit from dynamic esthetics for showing which device they are controlling. Alarm clocks are very personal devices and seem likely to be customized to a user's tastes. Car interiors come last: I feel that while they may be customized, this would never go beyond fitting with the overall style of the car.
User tailoring of product features: For dashboards, the concept has long-duration interactions that are highly personal and offers new comforts and safety features by tailoring to specific users. For remotes, I can clearly see that people use different functions on the parent device, and recognize that not all users want the same features on their remote. Lastly, in alarm clocks the actual interaction is very short and I don't see much value in tailoring these features to users.
Software based features in the physical domain: Remotes are flat and therefore offer much of the functionality that smartphones already offer in the flat domain, so FSI's don't add much that is new. Cars are a clear winner, as they have many curved surfaces and a very demanding physical control interface in the form of the steering wheel, which I can see embodying many personalized apps for improving the driving experience.
Consolidation of products: Remotes speak to the imagination, as they have much in common with smartphones. I don't see the alarm clock consolidating any more products in the bedroom; many products that could be consolidated, such as radios and lighting, are already in current alarm clocks.
Smart sensing: Dashboard sensing features add new ways of providing a safer car and of presenting controls where they are needed, so I consider dashboards a clear winner. Remotes are a good second, as the idea phase showed many interesting ways in which new manual interactions could be embodied.
System segregation: There is clearly a possibility for radical change in the way dashboards are built up. Remotes benefit less, as the paths for lightguides will be relatively short.

Fit with lightguide displays The dashboard is clearly the most challenging, with a large rotating part, and is the most difficult to achieve with the current state of printed lightguide displays. Remotes should be much more achievable, as the paths will be relatively short.

Conclusion Dashboards clearly demonstrate the most of the envisioned affordances, but are a challenge to embody with lightguide displays. A second scoring was made with weight factors based upon the affordance roadmap in section 9.4. However, since the scoring is relative, this did not change the outcome significantly. Therefore the dashboard seems the best concept to elaborate into a proof-of-concept prototype. The length of printable lightguides has to be kept in mind as a challenge.




Materialization

While freeform surface interfaces might look good on paper at this point, an important question has been left unaddressed: how are they to be created? The following chapters detail the technical background of lightguide displays and show how lightguides can be used to create display and sensing functionality in a working prototype.


15. LEDs LEDs are used in many current displays. In researching the possibilities of creating a lightguide display, I wondered whether LEDs were the best choice as a generator of light. Consequently, the possibility of using LEDs as light sources in a Freeform Surface Interface had to be researched. This chapter tries to answer whether LEDs are a good light source for such displays.

15.1 Working principle LED stands for Light Emitting Diode. Diodes are semiconductor components made up of two types of semiconductor material. Such materials have conductive properties that lie between those of a conductor and an insulator. By inserting imperfections into a semiconductor, a process called doping, the material can be made to be one of two types: 1. n-type material, which has an excess of electrons. 2. p-type material, which has a lack of electrons, or 'holes'. While the material does not have the free-moving electrons found in conductor materials, the excess or missing electrons can move among the molecules, similar to the gap in a sliding puzzle. [46] When the holes and electrons meet at the interface between the n-type and p-type material in a diode, they form an equilibrium (Fig. 45r), so that a stable zone exists between both types of material. To overcome this equilibrium, a voltage potential is needed.

As a result, a diode does not freely conduct current until a certain voltage is reached. When an electron finds a hole and drops to a lower energy state, a portion of the difference in energy is emitted as photons, or light. Due to the specific structure of the electron bands in the material, different light spectra are emitted by different materials. Putting a reverse potential on a diode blocks the current instead; when a sufficiently large reverse voltage is applied in this manner, the diode may become damaged. [47]
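
The link between the semiconductor material and the emitted colour can be made concrete with a standard textbook relation (the values below are illustrative handbook figures, not measurements from this project): the energy of an emitted photon is close to the bandgap energy $E_g$ of the junction material, so the peak wavelength follows from $\lambda \approx hc / E_g \approx 1240\ \text{eV nm} / E_g$. A junction with a bandgap of about 1.9 eV therefore emits around 650 nm (red), while one with an effective gap of about 2.8 eV emits around 440 nm (blue).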

15.2 Types of LEDs Diodes are generally encased in a polymer to shield the semiconductor from contact with air. The colour of light that an LED emits depends on the type of semiconductor material. While the first LED invented emitted red light, over the years more materials have been developed, and now a wide range of visible, infrared and ultraviolet light is possible. Some LEDs are multicolour, either through properties of the semiconductor or by casting multiple LEDs into a single casing. Over the years, increasingly powerful LEDs have been developed; higher-power LEDs tend to overheat, so special casings have to be developed. [48] Organic LEDs (OLEDs) are a relatively new type of LED built up from polymers. These can be used in flexible displays [49] and have been shown to be printable. [50]

LEDs are often employed in a matrix formation (Fig. 46), in which all LEDs of a single row or column are connected by their anode or cathode, respectively. Such arrays are driven by very quickly alternating voltages across the rows and columns, lighting only a few LEDs at the same time. However, since this process happens at very high frequencies, the human eye sees the entire matrix as working simultaneously. Sometimes the casing is lens-shaped such that the light exits it in a certain shape, and this is used to create alphanumeric displays. So-called 'RGB' LEDs or multicolour LEDs contain multiple diodes in one housing, and often have a common cathode.

Fig. 45: Schematic drawing of a common dome-type LED (l). To the right, a schematic illustrating the working principle of LEDs.

Fig. 46: (l) An RGB LED matrix showing different colours. (r) An alphanumeric LED matrix.

Fig. 47: An assortment of LEDs. Note that the housing is mostly indicative for human recognition, and that the colour of light is determined by the material of the LED.
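
To make this row-and-column multiplexing concrete, the minimal Python sketch below (a hypothetical illustration, not code used in this project) computes the drive signals for one refresh of a small matrix: exactly one row is energized per step, the column lines are set according to that row's pixels, and cycling through the rows fast enough makes the whole frame appear continuously lit.

# Minimal simulation of time-multiplexed driving of an LED matrix.
# Hypothetical illustration; a real driver would toggle microcontroller pins.

FRAME = [            # 1 = LED on, 0 = LED off (a 3x4 example image)
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
]

def scan_cycle(frame):
    """Yield (active_row, column_levels) pairs for one full refresh.

    At every step exactly one row line is driven, and each column line
    is switched on only if the pixel in the active row should light.
    Repeating this cycle at a few hundred hertz exploits persistence of
    vision, so the eye perceives all rows as lit at the same time.
    """
    for row_index, row in enumerate(frame):
        yield row_index, [bool(pixel) for pixel in row]

for active_row, columns in scan_cycle(FRAME):
    print("row", active_row, "active, columns driven:", columns)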

15.3 Applications LEDs are used in a wide array of applications. They are superior to other means of lighting due to their long lifespan, low energy consumption and robustness [51]. Examples of use include signal generation in fiber-optic communication, lighting, displays, signalling and light detection.

15.4 LEDs as sensors LEDs are capable of acting as light sensors, and are sensitive to wavelengths equal to and shorter than those they emit. [52] Two methods for using LEDs as light sensors exist:

1. An LED will generate a small voltage potential when light hits it, similar to a photovoltaic cell [53]. This voltage potential can be amplified and used to detect incident light. An example of matrix sensing using this method is shown in Figure 48.

2. By reverse-biasing an LED, which means feeding it a voltage potential in the direction opposite to normal functioning, a charge builds up between the anode and the cathode of the LED. When the voltage potential is removed, this charge discharges, similar to a capacitor. The LED discharges faster when light of equal or shorter wavelength than it emits hits the semiconductor. [54]

It is important to note that LEDs can alternate between normal functioning and acting as light sensors at higher frequencies than the human eye can perceive, meaning the human eye will perceive a constantly burning LED.

Fig. 48: Multi-touch sensing on a LED matrix. [55]
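
A possible software structure for the second method is sketched below in Python. The hardware calls (set_forward, set_reverse_charge, wait_for_discharge) are hypothetical placeholders rather than an existing API, and the sketch is an assumption about how such a loop could be organized, not the implementation used in this project: the LED is briefly reverse-charged, the time until the charge leaks away is taken as a light reading, and the LED then immediately resumes emitting, so the user still perceives it as constantly lit.

import time

def set_forward(led):
    # Hypothetical stub: drive the LED normally so it emits light.
    pass

def set_reverse_charge(led):
    # Hypothetical stub: reverse-bias the LED, charging its junction.
    pass

def wait_for_discharge(led):
    # Hypothetical stub: return the time (in seconds) until the reverse
    # charge has leaked away. Incident light speeds up the discharge, so
    # a short time means much light and a long time means little light.
    time.sleep(0.001)            # placeholder for the real measurement
    return 0.001

def emit_and_sense(led, cycle, sense_every=100):
    """Emit light, but switch to a brief sensing phase every `sense_every` cycles."""
    if cycle % sense_every == 0:
        set_reverse_charge(led)
        reading = wait_for_discharge(led)   # lower value = more incident light
        set_forward(led)                    # resume emitting right away
        return reading
    set_forward(led)
    return None

In an FSI, such a reading could be compared per LED against a baseline taken when nothing covers the surface, so that a sudden change indicates a finger above that lightguide.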

15.5 Conclusions LEDs are a mature, widely used lighting solution found in many products today. There is a good variety of them on offer, and they provide sensing capabilities in two ways. They are already heavily used in fiber-optic communication for both emitting and sensing, and therefore seem the perfect choice as light source and light sensor in printed lightguide displays.



16. Touch Sensing & Machine Vision As discussed, multi-touch sensitivity is a necessity for the creation of freeform interfaces. This chapter details which forms of optical touch sensing exist for freeform surfaces and how they work. While the overview is far from exhaustive [56], it covers several major mechanisms.

16.1 Drawbacks of optical systems Every touch-sensing mechanism has its drawbacks. A system that relies solely on light to infer human touch will find it hard to discern something that merely looks like a human finger or hand from an actual finger or hand. For example, a system that only looks at dark spots to infer the presence of a finger could also be controlled using a pen. When such interaction is intended by the user, this side effect causes no harm. However, when the user does not intend to control the interface but the interface thinks it is being controlled intentionally, that clearly makes the interface unreliable.

16.2 Infrared Projection sensing The Sphere project by Microsoft Research shows a multi-touch interface created through internal projection in a hollow sphere [57]. Touch sensing is done by capturing the reflection of an infrared light source inside the sphere. The infrared light is not visible to the user, but is emitted through the surface of the sphere and reflects off hands and other objects. A computer algorithm processes the shape of these reflections and determines whether they are human hands or not.

Fig. 49: The Sphere project by Microsoft Research detects hands using infrared reflections.

Another way of sensing hands using infrared light is by looking at the infrared light that is continuously emitted by a user's hands, as heat radiation is infrared light. However, this requires a significant temperature difference between the user's hands and the surrounding air, which is often not the case.

16.3 Frustrated total internal reflection A widely known but relatively little-applied method of multi-touch sensing is frustrated total internal reflection (FTIR), a process that is further discussed in the fiber optics chapter of this thesis. Light from a light source is sent to internally reflect against the touch surface boundary. As the user places a finger on the boundary surface, the light is no longer totally internally reflected and leaves the surface, because the human finger has a different refractive index than air. The finger is thus said to 'frustrate' the TIR. A camera or other light-sensitive element is set up to measure the reflected light, and when light from a specific source fails to be detected, it is an indication that the user has touched the surface at the point where that light would normally reflect. The viability of this technique has been demonstrated extensively, and it has been shown to work in printed prototypes by Disney Research. [28]

Fig. 49: Left: A patent describing a frustrated total internal reflection (FTIR) touch-sensing method. Right: An FTIR solution developed at New York University [58]

16.4 Occlusion
Another often-employed technique is registering light paths being occluded by the human finger. This requires a grid of both sensing elements and emitting elements to be placed around the area where touch is to be sensed; often infrared light is used [56]. The major drawback of this technique is that it requires the light sources and sensing elements to be in a straight line with the human finger, which severely constrains the shapes that can be used.

Fig. 50: Zerotouch occlusion-based interface. [56]

16.5 Machine vision
The field of Computer Vision or Machine Vision studies the recognition of objects by processing digitally captured images using software algorithms. [59]

Such methods usually try to find contrast differences through edge detection. The shape of these edges can then be analyzed and mapped to some internal logic or library of shapes to allow the computer to recognize objects. This is why QR-codes and other types of markers feature high-contrast rectangular graphics: they are easier to detect.

Computer vision software needs to account for several factors, such as lens distortion and perspective distortion. Especially when the computer is to track the motion paths of objects, this distortion can become significant when a camera picture is all that is relied on. Another factor is the lighting conditions surrounding the digital eye and the things to be monitored. In very dim or bright environments, imaged pixel values can become sufficiently close together to result in unreliable edge detection, especially when shadows from other objects in the vicinity are cast over the object that is to be detected. Because of these difficulties, active markers are often used [60], but these are unreasonable to expect in consumer-product interaction.
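To make the contrast/edge-detection idea above concrete, the toy example below marks a pixel as an edge when its brightness differs strongly from a neighbouring pixel. It is an illustration written for this report only, not the algorithm used by any of the cited systems; the image, dimensions and threshold are made up.

```cpp
// Minimal illustration of edge detection by thresholding local contrast.
#include <cstdio>
#include <cstdlib>

const int W = 6;
const int H = 4;

int main() {
  // Tiny grayscale "image": a bright square on a dark background.
  int gray[H][W] = {
    {10,  10,  10,  10, 10, 10},
    {10, 200, 200, 200, 10, 10},
    {10, 200, 200, 200, 10, 10},
    {10,  10,  10,  10, 10, 10},
  };
  const int threshold = 50;

  // A pixel counts as an edge when the brightness difference to its right or
  // lower neighbour exceeds the threshold (high contrast = easy to detect).
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int dx = (x < W - 1) ? std::abs(gray[y][x] - gray[y][x + 1]) : 0;
      int dy = (y < H - 1) ? std::abs(gray[y][x] - gray[y + 1][x]) : 0;
      std::putchar((dx > threshold || dy > threshold) ? '#' : '.');
    }
    std::putchar('\n');
  }
  return 0;
}
```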

16.6 Conclusions
From the examples discussed we can clearly see that there are many possibilities for sensing touch interaction, provided some room for error is allowed in picking up touch from objects that are not fingers. For a more consistent method of touch sensing, other technologies will have to be considered, but these will most likely rely on pressure sensitivity and/or electrical sensitivity. While it is not clear whether the methods discussed can be applied in lightguide displays, elements of these solutions will be useful to consider when developing a touch-sensing mechanism for lightguide interfaces.

Fig. 51: Left: Illustration of edge detection in computer vision. Right: Typical marker pattern optimized for recognition with edge-detection methods.


17. Fiber Optics
The use of transparent structures to conduct light from one point to another is not new. Fiber optics have been used for over sixty years [61], and the functioning principle behind them has been known as early as 1840.

Currently, fiber optics are primarily used to transfer information across the world for high-speed network and phone connections. They have also been used for lossless transfer of digital audio, video, and other signals.

17.1 Total internal reflection
When light travels from one transparent medium to another, as discussed in Appendix A, it will refract. Snell's law describes how the propagation direction of light changes, depending on the refractive indices of both materials and the angle of incidence of the light with respect to the boundary:

sin(θ1) / sin(θ2) = v1 / v2 = n2 / n1

In this formula, θ1 and θ2 are the angle of incidence and the angle of refraction, respectively, n1 and n2 are the refractive indices of both materials, and v1 and v2 are the speeds of light in those materials.

By taking the angle of refraction to be 90 degrees, an angle of incidence can be found at which light hitting the boundary continues parallel to the boundary. This angle is called the critical angle (Fig. 52), because all light with an incident angle larger than the critical angle does not cross the boundary but is totally reflected. This is called total internal reflection.

Fig. 52: Left: A refracted lightray. Middle: A light ray at the critical angle. Right: A totally internally reflected ray. Note that the angle for total internal reflection is larger than the critical angle.

Fig. 53: Total internal reflection in a transparent material, demonstrated using a laser pointer.

17.2 Propagation
The principle of total internal reflection is used in fiber optics to keep light rays trapped within a fiber over very long distances. Fiber optics are made up of a core material that transmits light and a cladding material of lower refractive index. Light rays propagating through the core are totally internally reflected as long as they enter the fiber at a certain angle, called the acceptance angle of the optical fiber.

Fig. 54: The acceptance angle of an optical fiber depends upon the critical angle between its core and cladding.
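For reference, the critical angle and the acceptance angle follow directly from Snell's law. The worked example below uses illustrative textbook values (a PMMA core against air), not measured values for the printed materials in this project:

```latex
% Critical angle: set the refraction angle to 90 degrees in Snell's law
\sin\theta_c = \frac{n_2}{n_1}
\qquad\Rightarrow\qquad
\theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right)

% Illustrative example: PMMA core (n_1 \approx 1.49) against air (n_2 \approx 1.00)
\theta_c \approx \arcsin(0.67) \approx 42^\circ

% Acceptance angle of a fiber entered from a medium with index n_0 (air: n_0 = 1)
\sin\theta_{max} = \frac{\sqrt{n_{core}^2 - n_{clad}^2}}{n_0}
```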

17.3 Modes
When light traverses an optic fiber, electromagnetic fields propagate with it. These can take on several forms (Fig. 55), called modes, depending on the wavelength of the light, the refractive indices of the core and cladding, and the dimensions of the core with respect to the cladding [62]. Some modes travel faster than others, which poses problems for sending signals across fibers in data communication.

Fig. 55: Overview of some different lightray modes that can exist in a cylindrical waveguide.

17.4 Attenuation
In an ideal fiber, all light that enters one side will exit the other end. However, due to several mechanisms a portion of the light is lost along the way [63]. Small fluctuations in material density cause Rayleigh scattering to occur, especially when the size of the particles that make up the material approaches the wavelength of the light traveling through it. Additionally, impurities in the fiber materials and intrinsic absorptive properties of the core and cladding material cause some of the light to be absorbed and converted into heat.

Fig. 56: Example of the attenuation spectrum in an optical fiber.
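As a point of reference (a textbook relation, not derived in this thesis), the strength of Rayleigh scattering falls off steeply with wavelength, which is one reason attenuation spectra such as the one in Fig. 56 drop toward longer wavelengths:

```latex
% Rayleigh scattering coefficient versus wavelength (textbook proportionality)
\alpha_{Rayleigh} \propto \frac{1}{\lambda^{4}}
```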


17.5 Types of Optic Fibers
The fibers as described in the previous paragraphs are step-index multi-mode fibers. This means there is a sharp transition in refractive index between the core and cladding material. Due to the size of the core with respect to the cladding, multiple modes (multiple rays) of light can travel through the core. When the core is made much smaller with respect to the cladding, only one mode (ray of light) can exist in the waveguide for certain wavelengths of light. Such a fiber is called a single-mode step-index fiber, and it allows for a higher bandwidth in transmission, as light pulses are not 'smeared out' by some modes traveling faster than others. However, when these fibers were first conceived there was great trouble in connecting light sources to the very small core that single-mode fibers require.

Fig. 57: Overview of differences between the three main types of optic fibers

Consequently, graded-index fibers were conceived, which do not have a sharp transition of refractive index between core and cladding, but a gradual one. This has the effect that all modes travel at effectively the same speed, because the longer, outer ray paths pass through regions of lower refractive index, where light travels faster. [64]


17.6 Conclusions
• For Freeform Surface Interface lightguide displays with a pixel diameter larger than 200 micron, the lightguide can be considered a multi-mode step-index fiber.
• The difference in refractive index between core and cladding is a major factor in determining the critical angle. Maximizing this difference will lead to a decrease in attenuation.
• Lightrays have the biggest chance of being internally reflected when the angle between the interface tangent and their direction of propagation is as small as possible.
• The previous point implies that the core-cladding interface should be as smooth as possible.
• The curvature of the lightguides should be minimized.
• The refractive index of the cladding material should be lower than that of the core material.


18. Creation of a Lightguide Display
We have seen in the previous chapters that LEDs seem a good light source for a lightguide display, and we have a better understanding of how lightguides work. In this chapter I build a lightguide display to see whether such displays are a viable way of creating FSIs, and try to identify challenges for further development.

18.1 Printed lightguide display prototype

Goal of the experiment
To find out to what extent a system of printed lightguides can transfer imagery from an LED matrix to a spherical surface.

Setup & Process
An Arduino microcontroller prototyping board was programmed to display a sequence of symbols on a 7x5 LED matrix (Fig. 59). The LED matrix was inserted into a printed object and kept in place using a snug fit. The source code for this program is listed in Appendix N.

A CAD model (Fig. 58) was parametrically modeled using the Rhinoceros Grasshopper plugin. The model consists of a shelled half sphere and a series of solid tubes protruding from a recess in the shell. The CAD model was exported to the .STL file format and printed on an Objet Eden 260 printer. The machine printed the specified geometry in VeroClear build material, while the empty space in the model was printed as solid support material. The wall thickness of the sphere shell was set to 1 mm. The lightguides were modeled with a diameter of 3 mm and a spacing of approximately 7 mm at the spherical surface and 4.6 mm at the LED matrix interface. The refractive indices of VeroClear and the support material are not published by the manufacturer (see chapter 20).

Fig. 58: A lightguide display modeled in Rhinoceros 4.0 using Grasshopper.

Fig. 59: A Kingbright LED matrix connected to an Arduino Uno running a test program.

Results
The patterns shown by the LED matrix were somewhat recognizable on the object surface. However, the translucent support material transferred too much light to get an acceptable contrast between pixel and non-pixel areas on the sphere (Fig. 60). Because the interface between the LEDs and the tubes was inset in the model, decent polishing and sanding of the interface was not possible. This affected transmission from the LEDs into the lightguides.

The lightguides at the edge of the matrix are more curved than those in the middle. It was clearly noticeable that the surface pixels corresponding to these guides were dimmer than those in the middle, which indicates that a lot of light is lost in those guides because total internal reflection is not achieved. This is in line with expectations, considering the results of Disney Research [28].


Fig. 60: A fully printed lightguide matrix created on an Objet Eden 260. The right image shows the LED matrix inserted in the print. The LED light clearly shines through the translucent support material.

18.2 Conclusion
The prototype shows promising steps toward a curved-surface dynamic display. However, both the transmittance of the lightguides and the contrast of the pixels with the surrounding surface material were found to be severely limiting in transferring a convincing image to the object surface. Enhancements in image quality will require advancements in both resolution and contrast. Pixel contrast should be improvable by using an opaque material to embed the lightguides and by using higher-intensity light sources. Additionally, the efficiency of the whole system will benefit from improving the LED-lightguide interface and increasing the transmission efficiency of the lightguides themselves. For the latter, a technique needs to be found to generate smooth edges, and further improvements might be made by printing a separate cladding around the lightguides with a lower refractive index than the VeroClear core material.

To increase the resolution of the display, a tighter stacking of the surface pixels and a reduction of the lightguide diameter are required. Reducing the diameter will, however, also decrease the transmission of the lightguides [28]; consequently, a trade-off between transmission and resolution seems inevitable.

18.3 Simulated print prototype
Taking the results from the previous experiment as a starting point, this experiment looks at how the same object would perform if the printed materials had the desired properties.

Goal of the experiment
The goal was to explore the workings of a simulated print surface display object.

Setup & Process
The CAD model from the previous experiment was modified to yield a solid shape with tube-shaped holes. The inset for the LED matrix was incorporated into a separate base part to allow for polishing of the LED interface. All parts were printed on an Objet Eden 260 using VeroBlue material (Fig. 61).

Fig. 61: Left: The printed parts for the simulated print prototype. Right: The fully functional simulated print running a demo program.

Fig. 62: Left: A fiber inserted into the printed part. Middle: The curved surface, with unpolished fiber ends sticking out. Right: The fiber-matrix interface with an unpolished fiber sticking out.

After water-jet treatment of the sample, 3 mm commercial PMMA optic fiber was inserted into it and snapped off close to the surface. Due to residual support material stuck inside the tube sockets, some segments were gently hammered in using a mallet. The segments of PMMA fiber were then removed, covered with general-purpose contact glue from the HEMA brand, and reinserted.


After the glue had set, both the curved object and the LED-lightguide interface were sanded using sandpaper of decreasing grain size (40, 360, 1000) and polished using Unipol(R) multipurpose polish. The base was fastened to the tube-socket part using two M3 bolts, the holes for which were printed and not post-processed. The LED matrix and its control setup were kept unchanged from the previous experiment, and the matrix was inserted into the base using a snug fit.

Results
As expected, the simulated print prototype performed better in transferring a clear image from an LED matrix to a curved surface. To the naked eye, the brightness of the surface pixels was indistinguishable from that of the bare LED matrix.

Fig. 64: Left: Cracks in the lightguides due to hammering and snapping. Right: Clearly visible intensity differences due to a finger obscuring surface pixels.

The majority of the light exiting a pixel at the surface clearly travels parallel to the normal of the pixel, as the pixels became increasingly less visible when viewed at larger angles with respect to the normal. The PMMA segments cracked fairly easily, probably caused by snapping them off or by hammering them in during insertion. These cracks did not seem to influence the brightness of the corresponding surface pixels as judged by eye (Fig. 64). Finally, it can be observed that from the LED-interface side, light intensity differences are clearly visible when pixels are covered by a finger.

Contrast between the pixel areas and the remaining surface area was also much better than in the previous setup.

Fig. 63: Lit pixels on the working simulated print display.

18.4 Conclusions
The results show that lightguides with the right properties can adequately transfer an image from an LED matrix to a curved surface. This implies that when AM technologies improve to the point of producing lightpipes nearing the quality of commercial plastic optic fibers, lightguides are a very promising method of producing FSIs. However, to make the pixels viewable from all angles, they should emit light in all directions. This can easily be achieved by roughening the pixel surfaces so that they reflect diffusely. Lastly, the light intensity differences that can be perceived at the interface side might yield possibilities for making the surface interactive by using light sensors (Fig. 64).


19. Touch Sensing on a Lightguide Display
In order to show that lightguides can accommodate a touch-sensitive interface, this experiment investigates whether it is possible to use the LEDs that generate the light at the source of the lightguides as sensing elements as well.

19.1 Experiment: Single Lightguide Sensing

Goal of the experiment
The goal of the experiment is to find out whether LED sensing can be used to register touch on a Freeform Surface Interface lightguide display pixel.

Setup & Process
The simulated print prototype from chapter 18.3 was reused for this experiment. The Arduino Uno microcontroller was programmed to alternate a single LED between emitting light and registering light, as described in 15.4. Each LED was alternated between three phases: lighting, charging and measuring. In the lighting phase the LED was powered normally and emitted light that was easily perceivable at the surface pixel. In the charging phase the LED was reverse biased and did not emit light, so the surface pixel was dark. In the measuring phase the voltage at the cathode end of the LED was measured twice: once at the start of the measuring phase and once at the end. These two values were compared to find a value representative of the amount of light reaching the LED. The Arduino program for this setup can be found in Appendix P. To visualize the results, measurements were read from the Arduino via a serial connection and visualized using Processing 1.5.1; the visualization code was taken from [65], slightly modified, and can be found in Appendix O. Different photon integration and lighting times were used, varying between 10 and 40 ms.

Fig. 65: Schematic overview of a process that uses LEDs to alternate between emitting light and sensing light through reverse-bias discharging.

Fig. 66: Overview of the sensing setup. The visible pixel is both displaying and sensing at the same time.
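To make the three-phase cycle concrete, the sketch below shows it in its simplest form. It is an illustration written for this report, not the program from Appendix P; the pin numbers and timings are placeholders.

```cpp
// Minimal sketch of the lighting / charging / measuring cycle used for
// LED-based light sensing. Pin numbers and timings are illustrative.
const int ANODE_PIN = 2;     // digital pin driving the LED anode
const int CATHODE_PIN = A0;  // analog pin connected to the cathode via a resistor

void setup() {
  Serial.begin(9600);
}

void loop() {
  // 1. Lighting phase: forward bias, the LED emits light.
  pinMode(ANODE_PIN, OUTPUT);
  pinMode(CATHODE_PIN, OUTPUT);
  digitalWrite(ANODE_PIN, HIGH);
  digitalWrite(CATHODE_PIN, LOW);
  delay(10);

  // 2. Charging phase: reverse bias briefly charges the LED's junction capacitance.
  digitalWrite(ANODE_PIN, LOW);
  digitalWrite(CATHODE_PIN, HIGH);
  delay(1);

  // 3. Measuring phase: let the cathode float and sample its voltage twice.
  //    Incident light increases the photocurrent, so the voltage drops faster.
  pinMode(CATHODE_PIN, INPUT);
  int startValue = analogRead(CATHODE_PIN);
  delay(10);
  int endValue = analogRead(CATHODE_PIN);
  Serial.println(startValue - endValue);  // larger difference = more light on the pixel
}
```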


Results
The setup measured the light intensity at the surface pixel while simultaneously lighting it. When the length of the entire cycle was around 20 ms, the pixel appeared as a continuously lit pixel. Setting the lighting time to 0 and introducing a delay such that the cycle time did not change but the pixel was left dark seemed to have no perceivable influence on the sensing capability. At longer cycle times the measurement became less sensitive to distortions from the 50 Hz AC electricity fields in the building. Figure 67 shows three readings, using the times listed in the table in Figure 68.

Reading   Lighting time (ms)   Charge time (ms)   Measuring time (ms)   Visible flickering
A         10                   0-1                10                    No
B         20                   0-1                20                    Yes
C         40                   0-1                40                    Yes

Fig. 68: Table listing the different cycle settings for the measurements in Figure 67. Visible flickering 'No' means I perceived the pixel as a constant source of light. Charge time is the interval during which a reverse bias was applied to charge the LED's cathode.

Readings were clearly influenced by the intensity of ambient light striking the surface. Consequently, holding the prototype at different angles to ambient light sources changed the average readings of the surface pixel. A finger crossing the pixel at a small distance of about 2-5 cm was perceivable in the readings. Furthermore, the gradual covering and uncovering of a pixel was clearly visible as a slope in the readings (Figure 67, bottom right). The pixel was more sensitive to sunlight than to artificial light sources, but was not tested outdoors. Lastly, influencing the incident light on neighbouring, electrically unconnected pixels seemed to influence the measured value of the single surface pixel.

Conclusions
The results clearly show that sensing light intensity on freeform lightguide interfaces is possible using LED sensing as discussed in chapter 15. However, the distortion from ambient electromagnetic fields is currently very high. This is especially the case when the cycle times are set such that the pixel appears continuously lit to the human eye.

Furthermore, as this technique detects ambient light or the lack thereof, fluctuations in environment lighting have a strong effect on the capability of the interface to sense the obscuring of pixels consistently. The setup will not work when the level of ambient light is too low, or in darkness.

Fig. 67: Detection of obscurement of the pixel in a normally lit office environment. The value represents an A/D-converted voltage difference measured on the analog port. The grey area indicates an actual finger covering the pixel. Cycle times: Top left: 40 ms. Top right: 20 ms. Bottom left: 10 ms. Bottom right shows the slow covering and uncovering of a pixel.


Lastly, it should be noted that the technique does not sense whether a human finger or hand touches the surface, but merely whether some opaque object prevents light from reaching a pixel. This means that other objects, such as styluses, can be used to obscure pixels, but also that some method needs to be found to distinguish between accidental and intentional obscuring.

19.2 Experiment: Multi-lightguide sensing In this next experiment I attempt to sense with multiple LEDs at the same time to see if I can ‘detect’ the position of a finger.

Goal of the experiment The goal of the experiment is to find out whether LED sensing can be used to register touch on multiple Freeform surface lightguide display pixels.

Fig. 70: Unobscured measurements before calibration. Grey bars are raw values, pink lines are the highest and lowest perceived values, and white bars are normalized values.

Fig. 71: The same measurements after calibration.

Setup & Process The setup from the previous experiment was Fig. 69: The second experiment running. expanded by connecting more LEDs from the matrix to the Arduino board. The values measured by the Arduino board were processed in a Processing 1.5.1 sketch that registered the value for each pixel and normalized for the highest and lowest seen values for each pixel. he normalization was necessary because it was found that pixels differed in sensitivity. (Fig 70 & 71)

Results The software was able to read multiple values from the prototype and showed them in the graph. The movement of a finger or other object across or above the surface registered as a decrease in value at that specific pixel. (Fig 72) Whenever the ambient light conditions suddenly changed, or when all pixels were obscured by a palm, readings changed accordingly across all affected pixels. (Fig 73)


Fig. 72: Detection of a single finger. Notice the ‘dip’ in the white bars.



Conclusions
The results showed that with the appropriate software and pixel density, fingers and hands can be detected using multi-lightguide sensing. However, the system as tested in this experiment shows some drawbacks:

• It has to be calibrated before use. Whether this is needed before each use and whether it can be automated needs to be further explored.

• It is highly sensitive to changes in ambient light strength.

• It is impossible to distinguish between real user hands and objects that have a similar shape. For example, a pen could be seen as a pinky.

Fig. 72: Readings go down when a flat hand approaches the surface. Approximate distances: A: 40 cm, B: 15 cm, C: fully obscured.

A general observation is that the size of the darkened spots caused by quasi-spherical fingertips and quasi-cylindrical fingers is influenced by the curvature of the surface interface, as can be seen in the following figure:

Fig. 73: Explanation of the relation between surface curvature and ambient light sensing.

19.3 Conclusions
In the past chapters we have seen that LEDs can be used to sense obscurement when this obscurement is done by a human finger under controlled lighting conditions. However, the results also clearly show that the system is very sensitive to changes in ambient lighting conditions. Next to that, sensing cycle times of about 10 ms are ideal for simultaneously sensing obscurement and emitting light, but at these cycle times the signal-to-noise ratio is very low. The importance of creating decent detection algorithms that use more advanced methods of normalization and pattern recognition cannot be overstated. Lastly, it should be noted that these experiments have been done on a simulated lightguide display and should be repeated on a printed lightguide matrix.


20. Technological Challenges
The experiments in the previous chapters have made a plausible case for lightguide displays. However, in order to reach the level of sophistication presented in the vision, several technical hurdles need to be overcome. In this chapter I summarize several investigations aimed at demonstrating the feasibility of creating high-fidelity lightguide interfaces in the future.

20.1 Computer Aided Design of Freeform Lightguide Interfaces
The vast majority of designers use Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) software to specify product geometries and create instructions for manufacturing. In a world where more and more communication is done digitally, these tools directly extend the capabilities of the designer. If a CAD package is unable to make freeform shapes, the designer will have a much harder time creating the digital files needed for manufacturing. While modeling the experimental lightguide prototypes in the previous sections, I noticed early on that modeling such structures is challenging. The shapes created in these experiments will admittedly be relatively easy to create for more experienced CAD engineers, but that is not the point. When lightguides are to approach the level of complexity required for products as described in my vision, conventional CAD work - manual modeling of tube geometries - will not be feasible at all. In Appendix H a short study shows what kind of algorithms might be used to automatically calculate tube paths so that they fulfill all the constraints needed to act as lightguides.

The conclusion I could draw, with my very limited background in such mechanics, was that a method for generating such lightguide paths would probably need to combine path generation with recursive path adjustment. However, after the study I have no doubt that such systems are very possible and that they do not pose a significant hurdle towards realizing complex lightguide displays.

20.2 Printing optically smooth lightguide core-cladding interfaces
In the course of this project I have attempted to link process aspects of the Objet printing technology to the attenuation in printed fibers. To this end, an experiment was carried out to measure the surface roughness of printed VeroClear samples. This experiment can be found in Appendix J and its results in Appendix K. The experiment showed that surfaces printed by the Objet Eden 260 in the faculty lab contain, at the very least, minor surface roughness features in the 10-100 micron range and major surface roughness features of around 800 micron. In Appendix L the starting points for a model of attenuation are laid out. While this model is incomplete, the observations made in this section can explain why there is a large amount of attenuation: if we assume that the major repetitive, wave-like surface roughness features measured on the glossy samples in Appendix J are representative of the core-cladding interface profile of printed lightguides, then shadowing effects and secondary reflection effects will cause a large portion of the light to be lost through the interface (Fig. 75). An attempt was also made to measure the refractive indices of Objet's materials, as these are hard to find and Objet is intent on keeping them a secret. This experiment can be found in Appendix M, but unfortunately it was inconclusive.

Fig. 74: Left: A CAD package. Right: An investigated method for creating tube paths using flow simulations.


These indices could have been used to calculate the critical angle in printed lightguides, which in combination with the roughness measurements could have been used to create a model of attenuation in printed lightguides. As this is not the case, the only observation that can be made is the following: if printed lightguides are to improve to the level of current commercial optic fibers, the feature size of the surface roughness of the core-cladding interface must be below the wavelengths of visible light [66]. To my mind - and this is purely speculative - this can only be achieved by smoothing the interface through chemical means, or by decreasing the droplet size by at least several orders of magnitude and increasing the precision of printing. There may, however, be alternative possibilities that I am not aware of.

Fig. 75: Combination of a result from the roughness measurement with the theory from the attenuation model. It is clear that the blue peaks generate a shadowing effect (red). Consequently, the lightrays (red lines) will almost never reflect off the next peak below the critical angle C. Note that this is a 2D simplification, that the angle C is arbitrarily chosen, and that the height of the peaks is exaggerated due to unequal scales.

20.3 Conclusion
According to the data available and best guesses, current polymer jetting technology as used in Objet machines is incapable of producing optically smooth core-cladding interfaces in printed lightguide structures. Unfortunately my background knowledge in the related subjects is severely lacking. It might be possible to achieve smoother surfaces by influencing the printing process variables, by using different printing techniques, or by using a combination of techniques.

My best guess is that very significant advancements have to be made in this printing technique to enable printing such smooth interfaces, and that such advancements will not take place in the near future.



Evaluation

Now that we’ve seen that lightguide displays offer a way of creating freeform surface interfaces and seen what such interfaces should be capable of conceptually, its time to put it all to the test. This chapter describes the creation of a Proof-of-Concept steering wheel prototype and evaluates the result.


21. Proof of Concept Prototype
Now that the possibility of creating a basic lightguide display has been demonstrated, the dashboard concept can be developed into a proof-of-concept prototype that demonstrates some of the affordances from the vision in working form. This chapter details the process steps taken in the embodiment of this proof-of-concept prototype.

21.1 Goal
The goal is to design a proof-of-concept prototype of the steering wheel from the dashboard concept, demonstrating the affordances formulated in the vision.

Steps to be taken
The prototyping phase consists of the following steps:
1. Decide upon the general layout of the prototype
2. Plan manufacturing steps
3. Order parts
4. Assemble prototype
5. Programming & testing
6. Evaluation

21.2 General layout of the prototype
In the prototype the aim was to show both the current capability of printed lightguides and the expected future capability. The steering wheel was chosen as a prototype for several reasons:

• The shape of many steering wheels is such that they are ill-suited for conventional displays, which demonstrates the potential of freeform surface interfaces
• The sketching and concept phase demonstrated many possibilities for recognizing grip-related interfaces
• The steering wheel is appropriately sized for a prototype that is to be made in the timeframe available

21.3 Design of the prototype
For the design of the steering wheel, a classical steering wheel shape was taken and modified. The process was of a holistic nature, in that the design of the shape had direct consequences for the method of manufacturing. Through simple sketches, which can be found in Appendix M, a curved 'H' shape was decided upon. To demonstrate the segregation of electrical systems and control features, the goal was to leave electric wiring out of the steering wheel as much as possible. This resulted in the printed display, showing the current capabilities of lightguide displays, being placed on top of the steering axle. The simulated print display was placed in the extremities of the steering wheel to emphasize the segregation that such lightguides offer. It deserves mention that the shape employed is found in many concept cars, mirroring a desire for a more futuristic steering wheel among car designers.

Fig. 76: Left: Steering wheel concepts in concept cars. Right: Examples of the sketches made.

21.4 Considerations in manufacturing the prototype
The method of manufacturing had great consequences for the detailing of the steering wheel. The options considered are listed in Figure 77. After considering these options, only two were considered viable within the budget, time and precision constraints: foam + coating, or printing on the Objet Eden. In the final design, a combination of these methods was applied.

Vacuum forming
  Pros: fast; cheap; continuous body
  Cons: tolerances; no internal support structure; mold master needed

Foam + coating
  Pros: cheap; continuous body
  Cons: time-consuming; imprecise

Printing externally
  Pros: high accuracy; fits with demonstration; multi-material possible; printed (demonstrative)
  Cons: very expensive; delivery time

Printing on Ultimaker
  Pros: cheap; printed (demonstrative)
  Cons: anisotropic surface finish; dimensional instability; build size

Printing on Objet Eden
  Pros: fast; printed (demonstrative)
  Cons: expensive; single material; build platform size

Laser cutting
  Pros: cheap; fast
  Cons: requires extensive designing

Modifying an existing wheel
  Pros: cheap; realistic
  Cons: uncertainty of feasibility; reverse engineering; metal armature in steering wheel

Fig. 77: Table outlining various possibilities for manufacturing the main body shape.

21.5 Embodiment of the steering wheel design

The body
The body is made up of two segments which fit tightly around the support structure. They were routed from two 35x500x300 mm polyurethane foam plates of the heaviest quality available (PUR100). To keep the steering wheel as sleek as possible, the foam body fits very closely around the aluminium support frame. This led to the foam body being as thin as 1 mm at some points, but apart from a minor tear this did not result in any problems. The body was modeled in Solidworks and took significant redesigns to fit smoothly around the support frame while remaining a smooth and sleek shape. Unfortunately the version of Solidworks available lacked support for freeform curved body design; consequently the roundings consist mainly of fillets, which left some surfaces flat despite the intention of making a fully smooth shape.

Fig. 78: Left: The PUR foam. Middle: The top part being routed. Right: The finished body parts.

Fig. 79: Left: A hand model for testing the dimensions. Right: Issues with the restrictive Solidworks Educational Edition.


The support structure
The assembly supporting the foam body consists of eleven parts: eight standard aluminium U-profiles of 15x15x15x1.5 mm and three custom-designed pieces that were printed on the Objet Eden 220 at the university's faculty. A major aim was to minimize the number of parts and keep the structure light. The shape of the U-profiles allowed easy access to the inner parts of the support structure to fit the wiring and optical fibers for the simulated print display. The T-connectors were made hollow for the same purpose. While bolting holes were modeled into the custom parts and drilled into the aluminium segments, the fit between the parts was found to be tight enough to remove the need for bolting altogether, especially once inserted into the foam body.

The middle segment served as a fixture for the printed display and the LED matrices, and was made to fit the four longer U-profiles and sit on top of a 30 x 3 mm aluminium tube that serves as a steering axle. The middle segment was kept hollow to allow all the wires to pass through while still maintaining rigidity. The completed support sub-assembly can be seen in Figure 83.

Fig. 80: Left: The connectors being removed from the printing tray. Right: The aluminium profiles before drilling.

Fig. 81: Left: The T-connector piece. Right: A cutaway view shows the T-connector's hollow structure.

Fig. 82: Left: The middle connector segment. Right: A section view showing the middle connector in the assembly.

Fig. 83: Left: The completed support assembly. Right: The assembly fit into the bottom body half.

The printed display
An extension of the previous prototyping, the printed matrix was modeled using Grasshopper by expanding the files used in previous experiments to a lightguide matrix of 70 pixels (7x10). This saved time and eliminated unwelcome surprises. The model (Fig. 84) was specified in two .STL files and printed at an external prototyping firm on a multi-material Connex machine. The outside part (indicated gray) was printed using VeroBlack material. The inner tubing (indicated green) was printed using VeroClear material. A gap of 0.2 mm was kept between these two parts, which the machine filled with support material. The printed part was ordered to ship with the support layer intact in order to protect the tubing. The model was sanded using 360-grade and 1000-grade sandpaper; however, as this appeared to deteriorate the support material between the VeroBlack and VeroClear, no further sanding or polishing was attempted.

Fig. 84: Left: The printed display CAD model. Right: Roughly cleaned printed display illuminated by a high-intensity phone flash LED.

The simulated print display
The simulated print display was made up of two halves that fit into the foam body. Small 1 mm holes were left in one of the two housings, and after cleaning the printed parts, 1 mm plastic optic fiber was inserted into these holes. As the number of ports on the electronics used was limited, only 8 of the holes could be connected to actual LEDs; the other holes were filled with small pieces of fiber to create the appearance of a full display. In order to keep the fibers in place while sanding, a 5 mm layer of epoxy resin was cast into that half and allowed to harden. After the resin had hardened, the fibers were sanded and polished flush with the surface of the printed part. Both halves were glued together using quick-hardening polyacrylate glue.

Fig. 85: The printed simulated display shells.

Multiple attempts were made to join LEDs to the fiber ends using glue and epoxy resin, but none of these solutions were strong enough. A solution was found by very carefully drilling holes into the LEDs themselves using a 2 mm drill. Since only 4 more LEDs could be accommodated on the Arduino Mega, 2 fibers were inserted per LED and fixed using polyacrylate glue. The LEDs were braced upon soldering boards and soldered to wires. Black duct tape was used to shield the LEDs from any light not originating from the fibers.

Fig. 86: Left: Future display fibers glued into the LEDs. Right: Making room for the fibers.

Fig. 87: Left: The fibers solidified into the future display housing. Right: LEDs glued and soldered.


Fig. 88: Left: Arduino Mega Prototyping Board. Right: The Prototype’s electronics.

The electronics
For driving and controlling the LEDs an Arduino Mega 2560 was used. Two different LED matrices were used, both of the Kingbright brand, with 3 mm red LEDs. Both a common-anode type (TA12-11EWA) and a common-cathode type (TC12-11HWA) were used to compare performance. The LEDs used for the simulated print ('future') display were part of a KEMO S093 LED set and were selected for their responsiveness to light and their brightness. All cathodes were connected to analog ports on the Arduino via 150 Ohm resistors, fully utilizing all ports on the Mega. All anodes were connected to digital ports.

Component          Anodes     Cathodes   Notes
Future display
  Green LED x2     D22-D23    A0-A1
  Yellow LED x2    D24-D25    A2-A3
Printed display
  LED matrix 1     D26-D30    A4-A10     common anode
  LED matrix 2     D31-D37    A11-A15    common cathode

Fig. 89: Table showing how the prototype's LEDs are soldered.

The dashboard
The dashboard was fashioned from 18 mm MDF panels and was upholstered with stretchable Skye fabric. The upholstery was fastened using thumbtacks. As all wires are soldered to a soldering board and the board is too large to fit through the steering axle, a small channel was routed in the dashboard to accommodate the cables when removing the steering wheel from the dashboard.

Fig. 90: Left: Upholstered dashboard. Right: Skye fabric used for upholstery.

Programming
The Arduino was programmed using the Arduino 1.03 software and the FrequencyTimer2 library. The timer library enabled accurate refresh rates for the LED matrices while keeping the sensing routines working in parallel. The source code for the demo shown at the validation was built upon that of the previous prototyping experiments and can be found in Appendix N.
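The sketch below illustrates how a timer-driven refresh of this kind can work: the FrequencyTimer2 callback scans one matrix row per interrupt while loop() stays free for sensing. It is an illustration written for this report, not the code from Appendix N; the pin numbers follow the table in Figure 89 for LED matrix 1, but the period and frame buffer contents are placeholders.

```cpp
#include <FrequencyTimer2.h>

// Timer-driven row multiplexing for a 5x7 common-anode LED matrix (illustrative).
const int NUM_ROWS = 5;
const int NUM_COLS = 7;
const int rowPins[NUM_ROWS] = {26, 27, 28, 29, 30};            // anodes (rows)
const int colPins[NUM_COLS] = {A4, A5, A6, A7, A8, A9, A10};   // cathodes (columns)

volatile byte frame[NUM_ROWS] = {0};  // one bit per column, one byte per row
volatile int currentRow = 0;

// Called at a fixed period; lights one row per call, so each row is on only
// 1/NUM_ROWS of the time, which is why a multiplexed matrix appears dimmer.
void refreshRow() {
  digitalWrite(rowPins[currentRow], LOW);        // turn the previous row off
  currentRow = (currentRow + 1) % NUM_ROWS;
  for (int c = 0; c < NUM_COLS; c++) {
    // Common-anode wiring: column LOW lights the LED in the active row.
    digitalWrite(colPins[c], bitRead(frame[currentRow], c) ? LOW : HIGH);
  }
  digitalWrite(rowPins[currentRow], HIGH);
}

void setup() {
  for (int r = 0; r < NUM_ROWS; r++) pinMode(rowPins[r], OUTPUT);
  for (int c = 0; c < NUM_COLS; c++) pinMode(colPins[c], OUTPUT);
  FrequencyTimer2::disable();                // no output on the timer's own pin
  FrequencyTimer2::setPeriod(2000);          // 2 ms per row -> ~100 Hz full refresh
  FrequencyTimer2::setOnOverflow(refreshRow);
  frame[2] = B0111110;                       // example pattern: light most of row 2
}

void loop() {
  // The main loop stays free for the LED sensing routines described in chapter 19.
}
```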

21.6 The finished prototype Due to time constraints it was not possible to upholster the prototype prior to the validation step, but a demo showcasing the capabilities was programmed and the prototype was assembled using black duct tape.

The Demo Program
The program used for the validation with experts is a very simple demo showing the sense and display capabilities of lightguide displays. Both the future display and the printed display are active at the same time. The printed display shows red arrows rapidly moving across the display, either from left to right or from right to left. A simple touch of the future display changes the direction of these arrows. This touch event is registered by averaging the sensed light across all four pixels and comparing it to a set threshold value, which is a very crude method. The arrows were created as a set of five bitmap patterns which were cycled through and flipped when the direction changed. It deserves mention that while the printed display was found capable of sensing, this was left out of the demo. Furthermore, the future display is on continuously and does not show dynamic graphics. While the whole system is wired so that both of these things are possible, it proved too challenging to accomplish a combination of them in the time available before the validation. The demo program can be found in Appendix Q.

Fig. 91: Top left: The working prototype. Top right: The future display. Bottom left: Printed display showing arrows going to the left. Bottom right: A touch event changing the direction of the arrows.
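The crude touch check described above amounts to only a few lines. The fragment below is an illustrative reconstruction of that idea, not the code from Appendix Q; the threshold value and names are placeholders.

```cpp
// Crude touch detection: average the four sensed pixel values and compare
// against a fixed threshold (illustrative values).
const int NUM_SENSE_PIXELS = 4;
const int TOUCH_THRESHOLD = 300;   // A/D counts; lower readings mean less light

bool isTouched(const int readings[NUM_SENSE_PIXELS]) {
  long sum = 0;
  for (int i = 0; i < NUM_SENSE_PIXELS; i++) {
    sum += readings[i];
  }
  int average = sum / NUM_SENSE_PIXELS;
  return average < TOUCH_THRESHOLD;  // covered pixels receive less ambient light
}
```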

Observations
In testing the prototype, some observations could be made. The difference in brightness between the printed display and the simulated print display is significant. I expect the following causes to be plausible:



• As discussed, the attenuation in printed lightguides causes a loss of light, so this is to be expected. Especially the outer pixels, corresponding to the tubes that are curved the most, suffered from this problem. It's possible that the support layer does not properly cover the tubes for total internal reflection, but without cutting the display open this is hard to ascertain.



• It could be that the interface between the printed lightguides and the LED matrices causes some light loss.



• The LED matrices have to be multiplexed: one row of pixels is lit at a time, and this is alternated over all rows, which means that every row of LEDs is only lit 1/5th or 1/7th of the time. The future display does not have this problem, but it equally splits sensing time and displaying time, so these LEDs are on half the time. Pulse width modulation (PWM) is a well-known method for dimming LEDs, so this duty cycle could be a big factor in the difference in brightness. This could also explain why the common-cathode matrix is dimmer than the common-anode matrix, as it has to alternate between seven rows rather than five.

It was also apparent from trying the displays before and after sanding that a well-polished pixel surface reduces the viewing angle of the pixels. This can be explained by the light exiting the surface nearly parallel to the surface normal due to the smoothness of the fiber end; a rougher surface will diffuse the light so that it is viewable from a wider angle. Furthermore, in bright environments the printed display was hard to perceive. This could be helped by using more powerful LED matrices or by reducing the attenuation loss in the printed fibers. Lastly, the prototype was very susceptible to different lighting conditions, mainly due to the sensing threshold method used.

Fig. 92: The bitmap patterns used to create moving arrows.


22. Prototype Validation

22.1 Goals
The main purpose of the validation was to confirm whether my vision resonated with professionals upon seeing the prototype. However, there were other questions that I wanted to see answered:

• Will freeform surface interfaces appear in the near future?
• Do freeform surface interfaces have a raison d'être?
• What are the critical hurdles to be taken in order for this technology to be implemented?

22.2 Process
To see whether my vision and proof of concept have any validity in real design practice, I visited two professional design firms to demonstrate and discuss the prototype in an hour-long session. From a shortlist of bureaus, two were selected on the basis of size and proximity. At Spark design in Rotterdam a meeting was held with five industrial designers with various expertises. At Flex/ the INNOVATIONLAB in Delft, I had a session with one industrial designer. Both companies have around 25 employees and much experience in the field of industrial design, working in a large variety of fields and winning many design prizes.

During both sessions an assistant was present to take photos and notes, help with the demonstration and oversee that the audio recording of the meeting went well.

The following questions were prepared beforehand:

• What new possibilities do you see with this product feature in terms of new functionalities?
• In what fields do you see applications for this?
• What do you expect to be the consequences for product design?
• What will be the major challenges in designing products containing these features?
• What do you consider to be disadvantages of this feature?
• What do you consider to be advantages of this feature?
• How likely do you see this being used in products in the (near) future?

The sessions were started with a brief introduction of myself and the project. Then the prototype was demonstrated, and I explained the working principles of the lightguide interfaces. I refrained from mentioning any affordances I had identified, or major challenges in the design of these displays, in the hope that the experts would name them themselves.

22.3 Results
During both meetings it was very clear that the designers were enthusiastic about the feature as shown and described. However, perhaps naively, I had underestimated the designers' curiosity. I spent much time answering specific questions about the functioning of the design, and only after a while were the workings and repercussions of the feature understood by everyone. While the prepared questions were kept in the back of my mind, the meetings turned into sharp dialogues about possibilities and challenges.

Fig. 93: The session at Spark (Left) and at FLEX / the INNOVATIONLAB (Right)


At the end of both meetings I was asked to elaborate on my vision, as it was clear that the designers would not pull it out of thin air. Upon doing so, again an interesting dialogue ensued. While the structure of the meetings was not how I had planned it, they were of great value and certainly confirmed many, if not all, of the ideas in this thesis. Preferably, both meetings would be fully transcribed in the appendix, but due to time considerations I have opted to summarize choice quotes and opinions on various topics in the following paragraphs.

Spark
The presented prototype and vision clearly resonated with the designers at Spark. There was consensus that both printing these features and using actual optical fibers are not viable for mass production at this moment. However, they were convinced that printing should be possible once a 'tipping point' had been reached where additive manufacturing machines became much more widespread. They estimated 15 years as a reasonable timeframe for large surface displays that would be used in all types of fields, naming automotive as an example. The key factors for this becoming a reality would be the resolution of the display and the cost price. They identified some niche markets in which the technology could already find application; crucial was that these markets have small series, high margins and specific demands for form freedom. An example mentioned was the creation of completely fluid and smooth interfaces for hospitals, which need to be disinfected after each operation and could display a different interface for each specific operation. Another example was the Fonckel lamp, which was described as having 'unattractive lighting' but could be redesigned without any capacitive meshing, using just fibers and power LEDs. They also expressed that this technology offers other interesting avenues apart from mere form freedom. These features could be used to make dual-body, fully rigid products that are joined using an O-ring and would be strong and resistant to moisture, dirt and electrical sparks. A choice quote was: "If you can manage to find the right materials, you could create a PDA that can be overrun by a truck and still function." Furthermore, they gave several examples of product cases from the recent past where this technology could potentially solve problems that are very hard to solve otherwise. One such example was the design of a waterproof wireless weighing scale that was described as a 'nightmare to get waterproofed'.

There was consensus that it would lead to a simplification of the build-up of products, which was described as something that 'would make us very happy'. However, there was some concern about the fact that every pixel would need to be present 'somewhere else in the product'; to this end, it would be preferable to develop fibers that taper out towards the surface so that they can remain as thin as possible inside the product. It was mentioned that very small displays already exist, for example in pocket beamers, which have an 'engine' the size of a postage stamp. They noted that tactile feedback is lacking from this system and that this would make it unusable in some situations. Swipe gestures and tap gestures would, if executed correctly, be enough for use in a steering wheel, but there is an aspect of tactility present in current car controls, a 'grab and control' quality, that would be impossible to replicate. The possibility of displaying information at dynamically chosen locations was found to be very relevant and was seen to fit a larger trend of car manufacturers working to create head-up displays. There was much interest in seeing whether this technology could be used to print such head-up display windows. Lastly, it was noted that "When grabbing becomes operating, that brings very profound possibilities" and "Almost all products we design have screens nowadays. This will get more and more relevant."

Flex
The designer at Flex was also convinced that these interfaces would not be suitable for mass-produced products, but would fit small-series, high-end products whose function is coupled strongly to their physical shape. She found the range of possible applications to be very broad: "In theory you can apply this in any shape where you want to have interactive elements." Two examples of possible applications were mentioned that would already be possible in the near future. One was the development of control systems for handicapped people, which have very specific form demands, as a person who has no fingers requires a different interface from someone who has no thumb. For such interfaces, a custom interface could be printed for each person while keeping the basic electronics the same. Such interfaces would be 'relatively cheap' compared to other solutions.


A high-end example would be a custom-made remote for a wealthy sheikh who wants a remote in the shape of his palace. She did not see real challenges in designing products with these features, provided that the modeling of the tubes would somehow be facilitated. It was also mentioned that with this technology it would become possible for consumers to create their own interfaces in a very low-tech way; an option would be to produce standard blocks of fibers that could be custom-routed into the desired shape. The 'sensing' aspect was found to be significant mainly in physically curved shapes; in flat products "it would be extremely challenging to surpass the current display and sensing capabilities." However, other methods of generating freeform surface displays, such as embedding LEDs into a product surface, "would lead to more complexity rather than simplification".

22.4 Conclusions
While the structure of both meetings was different than planned, I find that the vision was strongly confirmed by the professionals, and that the prototype was found convincing enough to demonstrate it. To quote one of the designers at Spark: "This is extremely cool. I'm convinced that anyone who sees this will understand the interaction and will be convinced that what you're presenting is very possible." At both meetings the designers emphasized multiple times that they already saw possible applications of this product feature, but that these would be limited to high-end, small-series, very specific niche products. Critical hurdles were found to be the development of smoother printed fibers, the reliability of sensing, higher-resolution displays (and consequently, tapered fibers), and design tools for the modeling of fibers.


23. Evaluation

23.1 Evaluation of the project
At the beginning of this project it was impossible to determine which route it would take, but the assignment was clear, and it seems a good idea to evaluate the work done in light of the original assignment: This assignment aims to explore applications of AM in designing products with unique visual properties. The starting point of the assignment is developing a thorough understanding of the technology and creating an overview of the possibilities that AM offers in creating part features with optical and visual properties unique to AM. Through experimentation, different possibilities will be explored. While there was some exploration of applications in terms of the possibilities offered in creating new visual product features, this was far from exhaustive. The choice for the two main directions studied (Objet & FDM) at the start was of a practical nature, in that it was based on the availability of tools. While I am sure I have gained a clear overview of the field of additive manufacturing, I feel there is an incredible scope of visual influences and possible optical product features still undiscovered. That being said, the end result is clearly a new and unique product feature and fulfills the goal of the assignment in that sense. The next step is researching how these possibilities can be used to create new products or services. This requires identifying markets where these techniques can add value and exploring relevant search fields. While this step of the assignment has been partly filled in by the validation of the prototype, the actual application of, and identification of markets for, Freeform Surface Interfaces has largely been left uninvestigated. I feel that this is due to the analyses done on technical aspects of the product feature and the time spent prototyping, but also due to the conceptual and explorative nature of the entire project. I am certain, however, that finding applications for these interfaces offers enough material to warrant an entirely new graduation project. The final steps are the designing and detailing of a proof-of-concept application of the explored features. The result should be evaluated using feedback from experts from relevant fields. This step has been fully fulfilled in my opinion, with the side note that some experts were already consulted during the project in earlier prototyping steps. Furthermore, the detailing of the proof-of-concept was steered to demonstrate the potential of the designed product feature, and did not factor in many other design constraints that would be considered when designing a steering wheel. To summarize, while some aspects of the assignment were only lightly touched upon, in line with the explorative style of the project, the main goal of the assignment has been fulfilled in that the results demonstrate a clear possibility for creating new visual product features using additive manufacturing.

23.2 Recommendations for Further Research
Fairly soon after the project turned to the investigation of printed lightguide interfaces, it became apparent that the subject is very complex and incorporates many disciplines, ranging from the fundamental properties of the additive technology and components used to the far-reaching implications that the feature has for the design of products. Consequently, there are many open challenges that merit further inquiry. The following overview is an attempt to mention some of them, though the list is far from exhaustive.

Additive manufacturing of optical geometry
The properties of printed optical interfaces between different transparent materials need to be investigated further. The link between process variables in printing and the surface profile of printed material interfaces is largely unclear, and it would be of great benefit if changes in process variables alone could already increase the surface quality of material interfaces. In order to reach surfaces suitable for optical applications, not only does the droplet size need to be severely reduced, but improvements will probably also be needed in the precision with which material is deposited, as well as in the hardening process. In order to create total internal reflection in solid products, transparent materials with differing refractive indices need to be created.
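To illustrate why two transparent materials with sufficiently different refractive indices are essential, the critical angle for total internal reflection at a core/cladding interface follows directly from Snell’s law; the index values in the example are illustrative assumptions, not measured properties of any printed material:

\[
\theta_c = \arcsin\!\left(\frac{n_{\text{clad}}}{n_{\text{core}}}\right), \qquad n_{\text{clad}} < n_{\text{core}}
\]

For example, with an assumed core index of 1.51 and cladding index of 1.47, the critical angle is arcsin(1.47/1.51) ≈ 77°, so only rays striking the interface at more than roughly 77° from the normal are guided. The smaller the index contrast between printed materials, the narrower the range of rays that a printed guide can keep confined.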

CAD generation of lightguides
If FSI’s are to be applied in products, they need to be designable by designers. During this project it has become apparent that creating lightguide paths with current CAD tools, especially in products that are freely curved, is both laborious and difficult. Tools that facilitate this process would be very useful, especially when they are linked to the design intentions that arise in the actual application of FSI’s.
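As a rough illustration of the kind of computation such a tool would automate, the sketch below generates the centre line of a single lightguide from an LED position to a surface pixel, arriving along the surface normal so the fiber exits perpendicularly. It is a minimal, self-contained sketch; the coordinates, the Bézier-based routing and all names are illustrative assumptions, not part of any existing tool or of the prototypes built in this project.

// Toy sketch of automated lightguide routing (illustrative assumptions only).
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 add(Vec3 a, Vec3 b)     { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }

// Cubic Bezier point: p0 = LED, p3 = surface pixel,
// p1/p2 pull the path along the LED axis and the surface normal.
Vec3 bezier(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, double t) {
    double u = 1.0 - t;
    Vec3 p = scale(p0, u * u * u);
    p = add(p, scale(p1, 3 * u * u * t));
    p = add(p, scale(p2, 3 * u * t * t));
    p = add(p, scale(p3, t * t * t));
    return p;
}

int main() {
    Vec3 led     = { 0.0, 0.0, 0.0 };   // LED on the source matrix (assumed position)
    Vec3 ledAxis = { 0.0, 0.0, 1.0 };   // emission direction of the LED
    Vec3 pixel   = { 20.0, 5.0, 30.0 }; // pixel location on the freeform surface
    Vec3 normal  = { 0.3, 0.1, 0.95 };  // approximate surface normal at the pixel

    double pull = 10.0;                 // how strongly the path follows axis/normal
    Vec3 p1 = add(led,   scale(ledAxis, pull));
    Vec3 p2 = add(pixel, scale(normal, -pull));

    // Sample the centre line of one lightguide; a real tool would also check
    // minimum bend radius and clearance to neighbouring guides here.
    for (int i = 0; i <= 10; ++i) {
        Vec3 p = bezier(led, p1, p2, pixel, i / 10.0);
        std::printf("%.2f %.2f %.2f\n", p.x, p.y, p.z);
    }
    return 0;
}

A real design tool would additionally have to respect minimum bend radii, keep neighbouring guides from intersecting, and export the swept solids to the printer’s file format.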

Detailing of Lightguide Structures
Regarding the embodiment of (printed) lightguides in FSI’s, four main problems require a solution.

Fiber tapering
In order to retain as much form freedom as possible, the possibility of printing tapered fibers should be investigated. Tapered fibers would allow the source LED matrix to remain small, and would allow a greater number of fibers to pass through narrow neck segments in products. Only when such fibers can be fabricated can the possibility of printing full-surface product features be realized.

LED-fiber interface
Secondly, the connection between fibers and LEDs needs to be optimized so as to minimize losses across this junction. While I have not investigated this, relevant examples ought to exist in the field of fiber-optic communication systems.

Ordering of lightguide pixels on the surface
How fiber endings can be aligned on freeform surfaces needs to be investigated. To (visually) cover the entirety of such a surface with circular pixels requires packing the pixels in certain ways, which will probably require pixels of different diameters. Because this is linked to the way the lightguides are routed within the object, this will be a complex problem. Furthermore, the consequences for the display of graphical elements such as buttons and text need to be investigated. A good starting point might be an MSc. thesis on automatic packing of circles on arbitrarily curved smooth surfaces [67].

Critical viewing angle
Results early in the process showed the viewing angles of polished-surface lightfiber displays to be severely lacking. Unfortunately there is a dilemma: while the viewing angle improves when the surface is more irregular, due to light diffusing at the surface, light sensing benefits from a smooth surface, as this provides more information about the direction of blocked light. A solution to this problem needs to be found.
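The narrow viewing angle of a polished fiber ending can be estimated with the standard step-index exit-cone relation; as above, the index values are illustrative assumptions and the printed guides may deviate from this idealization:

\[
\sin\theta_{\max} = \mathrm{NA} = \sqrt{n_{\text{core}}^{2} - n_{\text{clad}}^{2}}
\]

With the assumed indices of 1.51 and 1.47 this gives NA ≈ 0.35, i.e. an exit half-angle into air of only about 20°, which is consistent with the poor viewing angles observed for polished surfaces.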

Sensing improvement
The sensing is currently very sensitive to ambient light conditions. Theoretically it should be possible to sense light from neighbouring fibers that scatters through the surface of the human skin. This technique is already used in ordinary LED matrices [link to patent], should also be possible when using waveguides as an intermediary, and warrants further investigation. Furthermore, when higher resolution displays become possible, they start to function as very crude cameras that can take ‘snapshots’ of their surroundings. This should allow the sensing of user gestures and other interesting possibilities. The LEDs used in the prototypes are not optimized to work as photodiodes; it might be possible to find or create LEDs that are specifically optimized for both tasks.
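The basic reverse-bias sensing principle used in the prototypes (cf. [54], [65]) can be summarized in a short Arduino-style sketch: the LED junction is charged in reverse, after which the discharge time is counted, with more incident light yielding a faster discharge. The pin numbers, timing constants and loop structure below are illustrative assumptions and differ from the actual prototype firmware.

// Minimal sketch of LED-as-light-sensor (illustrative, not the prototype code).
const int ANODE_PIN   = 2;
const int CATHODE_PIN = 3;

void setup() {
  Serial.begin(9600);
}

// Reverse-bias the LED to charge its junction capacitance, then time how long
// the charge takes to leak away. More incident light -> faster discharge.
long measureDischargeTime() {
  // Charge step: anode LOW, cathode HIGH (reverse bias).
  pinMode(ANODE_PIN, OUTPUT);
  pinMode(CATHODE_PIN, OUTPUT);
  digitalWrite(ANODE_PIN, LOW);
  digitalWrite(CATHODE_PIN, HIGH);
  delayMicroseconds(50);

  // Measure step: let the cathode float and count until it reads LOW.
  pinMode(CATHODE_PIN, INPUT);
  digitalWrite(CATHODE_PIN, LOW);   // disable the internal pull-up
  long counter = 0;
  while (digitalRead(CATHODE_PIN) == HIGH && counter < 30000) {
    counter++;
  }
  return counter;                   // small value = bright, large value = dark
}

void loop() {
  Serial.println(measureDischargeTime());
  delay(100);
}

Because the same LED alternates between emitting and sensing, a measurement like this has to be interleaved with the display duty cycle.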

Design for FSI’s
Because this feature offers a completely different approach to designing product control and display features, the implications for designers need to be understood. User research should provide many insights into the cognitive ergonomics of FSI’s. However, it is unclear what kind of tools are needed to develop user interfaces on the surfaces of products, both in terms of methodology and in terms of software.


Other properties of lightguide-dominated objects
Filling up entire objects with lightguide matrices undoubtedly affects their structural performance, and this needs to be investigated further. Additionally, it would be interesting to find out how well current printed materials cope with changes in temperature and with chemical deterioration.

Cost of FSI’s
The cost-effectiveness of FSI’s needs to be investigated, both to identify in which product niches they are currently viable and to project when they will become viable as mass-producible product features.

Conclusions
The promise of printed lightguides to fulfill the vision of Freeform Surface Interfaces is tantalizing and offers a broad range of challenges to be solved. Looking back, this project seems to have been a step in the right direction, but the majority of the work is still ahead!


23.3 Reflection and Acknowledgements
Looking back on an individual project of this scale brings back many memories, frustrating and elating, and exposes our profound inability to foretell the future.

Expectations
Taking on this project stemmed from a curiosity about the world of 3D printers, which I had never been in contact with before. I had read many things about them, of course, but had never used one, nor did I know about the vast array of different technologies that exist. Admittedly, I was hoping to work on the technology itself before I took on the assignment of looking at visual properties, and I was a bit disappointed that I only ‘got to work’ on product aesthetics. During my studies my interest was never in the courses that concerned aesthetics and form, but more in the technical and abstract. However, when I now look at what came out of the process, I could not wish for any other assignment, and the project has given me a new view of a more technical, explorative way of designing that I felt very much at home in. This project has reminded me why I started studying Industrial Design in the first place. In that sense, I regard the project as a great personal success.

Process and Planning
However, when I look back at the process, it was far from smooth. Even though I tried to plan as well as I could, the project was of such an explorative nature that I had trouble seeing the steps ahead, often trying to figure out what step I was really in. Although I do think that a more guided and delineated assignment would have been easier for me in that sense, it would also have limited my ability to freely explore interesting avenues in depth, which has allowed me to learn a great deal about a broad range of subjects during this project. The process itself, when I look back at it in the report, looks rather jagged, but to me it only feels that way when the project is viewed as a more classical design process. I feel that many if not all of the subjects investigated were important for understanding the complex problems surrounding this product feature, but the inability to convey it as a logical set of smooth steps has been very frustrating throughout the process, and I am at a loss to imagine how such a thing should be done.

Consulting with experts
I’ve learned during this project that asking the right person for a cup of coffee to discuss what you’re working on is one of the most valuable ways of learning and gaining new insights. This project would simply have been impossible without the dozens of meetings with university employees, people from industry and close friends. Not only was their input useful in an informational sense, it was often also the only source of motivation that kept me going. Seeing someone else become enthusiastic about what I was doing helped remind me that most of my negativity towards my own work was unwarranted.

Supervision
My mentor has been very supportive throughout this project and was very easy to work with. I was very happy that he thoroughly understood what I was doing and was able to correct me when I made mistakes or give useful advice when I was stuck on a problem. His many suggestions for alternative avenues to explore were a consistent source of motivation.

Environment
Lastly, I want to mention that the working environment at the Faculty’s foundational labs and workshop was very conducive to a good working atmosphere. Not only was everybody willing to help out with any problem I had, they were also good at keeping me motivated by showing me interesting things or talking about my progress.

Concluding...
I’m happy with the outcome of the project and have discovered many new interests. I look forward to more projects of this nature! To everyone who helped me during this project in one way or another: thank you!


References
[1] Multiple Authors (2013, Aug 4). Refractive index - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Refractivity
[2][3] Multiple Authors (2013, Aug 4). Transparency & Translucency - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Transparency_and_translucency
[4] Multiple Authors (2013, Aug 4). Reflectivity - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Reflectivity
[5] Elert, G. (2013, Aug 4). Color - The Physics Hypertextbook. [Online] Available: http://physics.info/color/
[6] Multiple Authors (2013, Aug 4). Graphic - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Graphic
[7] Unknown Author (2013, Aug 4). Surface Texture in 3 Easy Steps. [Online] Available: http://www.digitalmetrology.com/SurfaceFinishIn3Steps.htm
[8] Bourell, D.L. et al., A Brief History of Additive Manufacturing and the 2009 Roadmap for Additive Manufacturing: Looking Back and Looking Ahead. RapidTech, 2009
[9] Gebhardt, A., Rapid Prototyping. Hanser, 2003
[10] Chua, C.K. et al., Rapid Prototyping: Principles & Applications. World Scientific, 2003
[11] Gebhardt, A., Understanding Additive Manufacturing: Rapid Prototyping, Rapid Tooling, Rapid Manufacturing. Hanser, 2012
[12] Mueller, T. (2007, Mar 1). Additive Fabrication Creates New Markets for Investment Casting. [Online] Available: http://www.moldmakingtechnology.com/columns/additive-fabrication-creates-new-markets-for-investment-casting
[13] ASTM F2792-12a: Standard Terminology for Additive Manufacturing Technologies. DOI: 10.1520/F2792-12A, 2013
[14] Interview with M. Visser at LuxExcel Group B.V., Goes, 17 Oct 2012
[15] Datasheet ProJet™ HD 3500 & HD 3500Plus Professional 3D Printers. [Online] Available at: www.3dsystems.com/3d-printers/professional/projet-3500-hdmax
[16] Datasheet Objet500 Connex. [Online] Available at: http://www.stratasys.com/en/3d-printers/design-series/precision/objet-connex500
[17] Datasheet ZPrinter® 650. [Online] Available at: http://www.zcorp.com/en/Products/3D-Printers/ZPrinter-650/spage.aspx
[18] Datasheet Fortus 900MC. [Online] Available at: http://www.stratasys.com/3d-printers/production-series/fortus-900mc
[19] Datasheet sPro™ 60 SD. [Online] Available at: www.3dsystems.com/3d-printers/production/spro-60-sd
[20] Datasheet Matrix 300+. [Online] Available at: http://www.mcortechnologies.com/3d-printers/matrix-300-plus/
[21] Datasheet LENS 450. [Online] Available at: http://www.optomec.com/Additive-ManufacturingDownloads/Product-Datasheets
[22] Wohlers, T., Lecture slides, RapidTech, 2012
[23] Hsiao, A. (2013, May 2). Terapeak Trends: 3D Printer RepRap Showing Signs of Growth. [Online] Available at: http://www.terapeak.com/blog/2013/05/02/terapeak-trends-3d-printer-reprap-showing-signs-of-growth#
[24] Unknown Author (2013, Aug 4). Rapid Prototyping Patents. [Online] Available at: http://www.additive3d.com/pat_db.htm
[25] Wohlers, T., Wohlers Report 1998. Colorado: Wohlers Associates, 1998
[26] Wohlers, T., Wohlers Report 2009. Colorado: Wohlers Associates, 2009
[27] Wohlers, T., Wohlers Report 2012. Colorado: Wohlers Associates, 2012
[28] Willis, K.D.D. et al., Printed Optics: 3D Printing of Embedded Optical Elements for Interactive Devices. UIST ’12, October 7–10, 2012, Cambridge, Massachusetts, USA
[29] Moilanen, J. & Vadén, T. (2012, May 31). Manufacturing in Motion: First Survey on the 3D Printing Community. Statistical Studies of Peer Production. [Online] Available at: http://surveys.peerproduction.net/2012/05/manufacturing-in-motion/
[30] Bamfield, P., Chromic Phenomena. Royal Society of Chemistry, 2010
[31] Unknown Author (2013, February). Top 10 Strategic Technology Trends for 2013. [Online] Available at: http://www.gartner.com/technology/research/top-10-technology-trends/
[32] Ashton, K. (2009, June 22). “That ‘Internet of Things’ Thing”, RFID Journal. [Online] Available at: http://www.rfidjournal.com/articles/view?4986
[33] Benko, H., Beyond Flat Surface Computing: Challenges of Depth-Aware and Curved Interfaces. MM ’09, October 19–24, 2009, Beijing, China
[34] Yu, Z. et al., Intrinsically Stretchable Polymer Light-Emitting Devices Using Carbon Nanotube-Polymer Composite Electrodes. Adv. Mater., 23: 3989–3994, 2011
[35] Wicker, R., Multi-Material, Multi-Function, Multi-Technology Additive Manufacturing: Is This Real? Lecture slides from the 2012 International Conference on Additive Manufacturing, July 2012
[36] Cencen, A., Embeddables: Automated Embedding of Electronic Components in 3D Printed Products. M.S. thesis, DE, DUT, Delft, 2012
[37] Unknown Author (2013, Aug 4). Optomec - Additive Manufacturing Technology for Printed Electronics. [Online] Available at: http://www.optomec.com/Additive-Manufacturing-Technology/Printed-Electronics
[38] Day, B. (2013, Jun 4). CLEO: 2013 Features New Research in UV Light for Food Storage, Printable Quantum Dot LEDs, and Smartphone Disease Detection. [Online] Available at: http://www.osa.org/en-us/about_osa/newsroom/newsreleases/2013/cleo_2013_features_new_research_in_uv_light_for_fo/
[39] Multiple Authors (2013, Aug 4). Remote Control - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Remote_control
[40] Tesla, N., Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles. US Patent 613,809, Nov 8, 1898
[41] Foster, R. & Kreitzman, L., The Rhythms of Life: The Biological Clocks That Control the Daily Lives of Every Living Thing. Profile Books Ltd, 2005
[42] Unknown Author, Store Catalog (2013, May 3). Vibrating Alarm Clocks for Deaf and Hard of Hearing. [Online] Available at: http://www.harriscomm.com/catalog/default.php?cPath=42_123
[43] Klauer, S.G. et al., The Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-Car Naturalistic Driving Study Data. DOT HS 810 594, TRAIS, Virginia, 2006
[44] McEntegart, J. (2013, May 28). Tom’s Guide: 12 Remote Control Apps for Android Devices. [Online] Available at: http://www.tomshardware.com/news/smartphone-remote-TV-Remote-App-DesktopControl-Remote-Desktop-Android,22735.html
[45] Law, I. (2013, Aug 4). Here’s the Proper Way to Hold a Steering Wheel. [Online] Available at: http://drivingtests101.com/articles_19_Here’s-the-proper-way-to-hold-a-steering-wheel
[46] Peddinti, V.K. (2008). Light Emitting Diodes (LEDs). [Online] Available at: http://www.ele.uri.edu/courses/ele432/spring08/LEDs.pdf
[47] Multiple Authors (2013, Aug 4). List of LED failure modes - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/List_of_LED_failure_modes
[48] Multiple Authors (2013, Aug 4). Light Emitting Diode - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/LED
[49] Multiple Authors (2013, Aug 4). Flexible organic light-emitting diode - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Flexible_organic_light-emitting_diode
[50] Snoeckx, K. (2009, Apr 7). Press release: Low-Cost, Large-Area Production of Flexible OLEDs a Step Closer. [Online] Available at: http://www.holstcentre.com/en/NewsPress/PressList/Agfa_ITOfree_OLED.aspx
[51] Soos, A. (2013, Feb 6). LED Criteria. [Online] Available at: http://www.enn.com/enn_original_news/article/45566
[52] Dietz, P. et al., Very Low-Cost Sensing and Communication. UbiComp 2003, Seattle, Washington, October 12–15, 2003
[53] Multiple Authors (2013, Aug 4). Photodiode - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Photodiode
[54] Cook, M. (2013, Aug 4). LED Sensing. [Online] Available at: http://www.thebox.myzen.co.uk/Workshop/LED_Sensing.html
[55] Han, J.Y., Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection. UIST ’05, October 23–27, 2005, Seattle, Washington, USA
[56] Moeller, J. & Kerne, A., ZeroTouch: An Optical Multi-Touch and Free-Air Interaction Architecture. CHI 2012, May 5–10, 2012, Austin, Texas, USA
[57] Benko, H. et al., Sphere Project. [Online] Available at: http://research.microsoft.com/en-us/um/people/benko/projects/sphere/
[58] Han, J.Y., FTIR Touch Sensing. [Online] Available at: http://cs.nyu.edu/~jhan/ftirsense/
[59] Interview with B. Lenseigne at Delft University of Technology, Faculty of Mechanical Engineering, Robotics Dept., Delft, 9 Apr 2013
[60] Interview with J. Verlinden at Delft University of Technology, Faculty of Industrial Design Engineering. JMP course in creating an Augmented Reality system, Delft, May 2012
[61] Multiple Authors (2013, Aug 4). Fiber Optics - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Fiber_optics
[62] Ezekiel, S. (2010, Feb 17). Fiberoptics Fundamentals | MIT Understanding Lasers and Fiberoptics. [Online] Video lecture. Available at: http://www.youtube.com/watch?v=jy9VSNXkbx4
[63] Multiple Authors (2013, Aug 4). Attenuation - Wikipedia, the free encyclopedia. [Online] Available: http://en.wikipedia.org/wiki/Attenuation
[64] Feit, M.D. & Fleck, J.A. Jr., Light Propagation in Graded-Index Optical Fibers. Applied Optics, Vol. 17, No. 24, 15 Dec 1978
[65] Cook, M., Simple Read 6 LED Sensors. [Online] Available at: http://www.thebox.myzen.co.uk/Workshop/LED_Sensing_files/LED_Sensor.pde
[66] Interview with R. Santbergen at Delft University of Technology, EWI faculty, Photovoltaic Materials and Devices, Delft, 26 Apr 2013
[67] Hoebinger, M., Packing Circles and Spheres on Surfaces. M.S. thesis, DMaG, VUT, Vienna, 2009