Spring 2019 Colloquia

Art, a Matter of Mind
Speaker: MC Flux
Tuesday, April 30, 11:30am - 12:30pm MT
Abstract: In our modern world of smartphone cameras and Instagram, our connection with art is often distilled down to a single digital picture smaller than our fist. But what do we lose by collapsing our multidimensional world into a tiny, flickering, 2D image? Even when we encounter individual works of art in a museum or gallery, the average time Americans spend looking at a work is between six and ten seconds. How, in our culture of increasing distraction, might we truly connect with the beauty around us? What drives our connection to art? What neural processes underlie our ability to slow down and reflect? How does this intricate interplay of mind and experience inform our aesthetic response?
Bio: M. C. Flux, M.S., M.A., Ph.D. student in Neuroscience and Clinical Psychology
Flux is a graduate student at CU Â糖心Vlog破解版 with a decade's worth of research in fields spanning molecular biology to human behavior. While currently working on a joint Ph.D. in Clinical Psychology and Neuroscience, his research at CU centers on identifying behaviors and biomarkers that facilitate our ability to overcome stress and bolster mental health. With an undergraduate degree in Biotechnology from Thomas Jefferson University, an MS in Neuroscience from NYU, and an MA in Clinical Psychology from CU Â糖心Vlog破解版, he is most interested in research questions that lie at the intersection of molecular biology, neuroscience, and mental health. His collaborative attitude has led to a diverse portfolio of dissertation projects, including studies of the immune and mental health effects of isolation/floatation tank therapy and whole-body hyperthermia, several other clinical collaborations, and novel analysis techniques for exploring the interconnection of biomarkers associated with stress and response to prejudice in American Indian populations. In addition to his scientific work, Flux is a graphic artist who illustrates all of his presentations, and he was previously a special effects makeup artist when living in NYC.

Embracing the Unknown
Speaker: Martha Russo
Tuesday, April 23, 11:30am - 12:30pm MT
Abstract: All of my work is purposefully obscure. It is just out of the grasp of language and thus brings us back to our rudimentary way of collecting information, namely, through the senses and the body. Although my work is steeped in the ceramics process, it sidesteps the traditions of the earth-bound, fragile, and precious material. Rather, the installations embrace the precarious. They extend into space, hover in mid-air, barely hold on, pile up, and are sometimes on the verge of disappearing into dust. The chameleon-like properties of clay and, specifically, its tenuous nature speak to the immediacy, transience, and fragility of life. Coupled with this quietness, the massive installations have a certain energy and force that further connect us to our roots and our origins. I want my works to get into your bones and guts, to touch on the raw, the visceral, the nerves; to murmur up through the body to make a time and place for contemplation and reflection about our basic biological humanness.
Bio: Martha Russo (b. 1962, Milford, Connecticut) earned her BA in developmental biology and psychology from Princeton University in 1985. In 1984, she suffered a career-ending injury while vying for a spot on the United States Olympic Field Hockey Team. After recovering from surgery, Russo was attracted to the physical nature of sculpture. She began studying studio arts in Florence, Italy in 1983 and continued studying ceramics at Princeton University. In 1995, she earned her MFA at the University of Colorado Â糖心Vlog破解版. Russo exhibits her sculptures and installations nationally at venues such as the Alan Stone Gallery in New York, the Denver Art Museum, the Museum of Contemporary Art/Denver, and The Santa Fe Art Institute. Her work was the focus of a 25-year survey at the Â糖心Vlog破解版 Museum of Contemporary Art in 2016. Through the socially and politically based art collective Artnauts, Russo has shown her two-dimensional works in hundreds of exhibitions in 18 countries since joining the group in 1996. In addition to her studio practice, Russo is an instructor at the University of Colorado Â糖心Vlog破解版; before that, she taught at Rocky Mountain College of Art + Design in Denver for 18 years. Russo is represented by Goodwin Fine Art in Denver. She lives in the mountains northwest of Â糖心Vlog破解版, Colorado with her husband, Joe Ryan, and two children, Odelia and Henry.

Laura and Mikhaila CHI Talks Preview: HCI Amusement and Software for Smart Textiles
Tuesday, April 16, 11:30am - 12:30pm MT
From HCI to HCI Amusement: Strategies for Engaging what New Technology Makes Old
Speakers: Laura Devendorf (presenting), Kristina Andersen, Daniela Rosner, Ron Wakkary, James Pierce
Abstract: Notions of what counts as a contribution to HCI continue to be contested as our field expands to accommodate perspectives from the arts and humanities. This paper aims to advance the position of the arts and further contribute to these debates by actively exploring what a "non-contribution" would look like in HCI. We do this by taking inspiration from Fluxus, a collective of artists in the 1950s and 1960s who actively challenged and reworked practices of fine arts institutions by producing radically accessible, ephemeral, and modest works of "art-amusement." We use Fluxus to develop three analogous forms of "HCI-amusements," each of which sheds light on dominant practices and values within HCI by refusing to fit into its logics.
Bio: Laura Devendorf is an artist and technologist working predominantly in human-computer interaction and design research. She designs and develops systems that embody alternative visions for human-machine relations within creative practice. Her recent work focuses on smart textiles, a project that interweaves the production of computational design tools with cultural reflections on gendered forms of labor and visions for how wearable technology could shape how we perceive our lived environments. Laura is an ATLAS Institute Fellow and directs the Unstable Design Lab. She earned bachelor's degrees in Studio Art and Computer Science from the University of California, Santa Barbara before earning her Ph.D. at the UC Berkeley School of Information. She has worked in the fields of sustainable fashion, design, and engineering. Her research has been featured on National Public Radio and has received multiple best paper awards at top conferences in the field of human-computer interaction.
AdaCAD: Crafting Software for Smart Textiles Design
Speakers: Mikhaila Friske (presenting), Shanel Wu, Laura Devendorf
Abstract: Woven smart textiles are useful in creating flexible electronics because they integrate circuitry into the structure of the fabric itself. However, there do not yet exist tools that support the specific needs of smart textiles weavers. This paper describes the process and development of AdaCAD, an application for composing smart textile weave drafts. By augmenting traditional weaving drafts, AdaCAD allows weavers to design woven structures and circuitry in tandem and offers specific support for common smart textiles techniques. We describe these techniques and how our tool supports them, alongside feedback from smart textiles weavers. We conclude with a reflection on smart textiles practice more broadly and suggest that the metaphor of coproduction can be fruitful in creating effective tools and envisioning future applications in this space.
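For readers new to weaving drafts: a draft is essentially a grid specifying, at each warp/weft intersection, whether the warp thread passes over or under the weft. The sketch below is a hypothetical illustration of that idea, not AdaCAD's actual data model; the class and field names are assumptions. It shows how a draft might be represented in code with one weft row flagged as conductive yarn, so that structure and circuitry can be reasoned about in tandem.

    # Hypothetical sketch of a weave draft data structure (not AdaCAD's actual model).
    # Each cell records an interlacement: True = warp over weft, False = warp under weft.
    from dataclasses import dataclass, field

    @dataclass
    class WeaveDraft:
        warp_count: int
        weft_count: int
        interlacements: list = field(default_factory=list)  # rows of booleans, one per weft
        conductive_wefts: set = field(default_factory=set)   # indices of conductive-yarn rows

        def plain_weave(self):
            # Fill the grid with a simple over/under checkerboard (plain weave).
            self.interlacements = [
                [(row + col) % 2 == 0 for col in range(self.warp_count)]
                for row in range(self.weft_count)
            ]

        def mark_conductive(self, weft_row):
            # Flag a weft row as conductive so circuitry can be routed along it.
            self.conductive_wefts.add(weft_row)

    draft = WeaveDraft(warp_count=8, weft_count=4)
    draft.plain_weave()
    draft.mark_conductive(2)

A tool that augments traditional drafts layers circuit concerns (which rows carry conductive yarn, where they may touch) on top of this kind of structural grid; AdaCAD's real drafts are of course richer than this toy example.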
Bio: Mikhaila Friske is a Ph.D. student studying information science in the College of Media, Communication and Information. She received a bachelor's degree in computer science from the University of Minnesota - Twin Cities in 2017. Advised by Laura Devendorf, Mikhaila researches smart textiles and textile design software in the Unstable Design Lab. Mikhaila's interests include exploring the intersection of handcraft and technology, as well as technologies that empower underrepresented groups.

How Do We Baby Proof "Smart" Homes?
Speaker: Tom Yeh
Tuesday, April 9, 11:30am - 12:30pm MT
Abstract: AI systems are now in millions of American families. Our children live and interact with them, sometimes for hours a day. Meanwhile, US colleges are projected to produce more than 400,000 computer science graduates by 2020, according to the Bureau of Labor Statistics. As they learn to design and develop AI, will they also learn to think about our children, to make sure children are protected, supported, nurtured, respected, and treated fairly by the AI systems? Unlikely. How about children with different abilities, such as children who are blind or visually impaired (BVI)? Will their needs be considered? Even more unlikely. If nothing is changed, many future AI engineers will pursue only speed and accuracy and pay little or no attention to AI's potential impacts on children. Alarming cases have already been reported in the press and in the literature in which AI systems purposefully or inadvertently manipulate, belittle, mislead, endanger, or discriminate against children. The societal cost of inaction can be enormous. How should we respond by involving researchers in AI, human-computer interaction (HCI), and child development (CD)? During the seminar, I would like to invite the audience to discuss three interdisciplinary questions: (1) [AI+CD] How do interactions with AI affect the psychological development of young children? (2) [HCI+CD] What design factors of an interactive AI system may mediate these effects? (3) [AI+HCI] How can AI incorporate these design factors at the algorithm and model levels?
Bio: Tom Yeh received his PhD from the Massachusetts Institute of Technology for studying vision-based user interfaces. In 2012, he joined the University of Colorado Â糖心Vlog破解版 (CU) as an assistant professor in the Department of Computer Science. Prior to joining CU, he was a postdoctoral fellow at the University of Maryland Institute for Advanced Computer Studies (UMIACS). Yeh's research interests include 3D printing, big data, citizen science, and mobile security. He has published more than 30 articles across these interest areas. He has received best paper awards and honorable mentions from CHI, UIST, and MobileHCI. In 2014, he received the Student Affairs Faculty of the Year Award. Yeh's research projects are funded by the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA).

Vigor of movement and economic utility: is movement a window to the mind?
Speaker: Alaa Ahmed
Tuesday, April 2, 11:30am - 12:30pm MT
Abstract: To understand subjective evaluation of an option, various disciplines have quantified the interaction between reward and effort during decision making, producing an estimate of economic utility, namely the subjective 'goodness' of an option. However, those same variables that affect the utility of an option also influence the vigor (speed) of movements towards that option. To better understand this, we have developed a mathematical framework demonstrating how utility can influence not only the choice of what to do, but also the speed of the movement that follows. I will present results demonstrating that the expectation of reward increases the speed of saccadic eye and reaching movements, whereas the expectation of effort expenditure decreases this speed. These results and others imply that vigor may serve as a new, real-time metric with which to quantify subjective utility, and that the control of movements may be an implicit reflection of the brain's economic evaluation of the expected outcome.
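One way to make the reward/effort/vigor link concrete, offered as a simplified illustration for readers rather than the speaker's exact formulation, is a rate-style utility in which the net gain of an option is discounted by the duration T of the movement that acquires it:

    U(T) = (\alpha R - \beta E(T)) / T

Here R is the expected reward, E(T) is the effort expended by a movement of duration T, and \alpha and \beta are subjective weights. A larger R makes every extra moment of delay more costly, so the duration that maximizes U shrinks and the movement becomes more vigorous; a larger effort cost pushes the optimal duration the other way, slowing the movement, in line with the results described above.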
Bio: Dr. Ahmed's research focuses on understanding how the brain controls movement. She uses a neuroeconomic approach that combines techniques from neuroscience, economics, psychology and engineering to investigate the costs and constraints underlying human sensorimotor decision-making, learning, and control. Dr. Ahmed is the recipient of an NSF CAREER Award and a DARPA Young Faculty Award presented to "rising research stars in junior faculty positions at U.S. academic institutions". Her work has been featured in Forbes, Wired, Time, PBS, and other national and international media outlets.

ShapeBots and Roomshift
Speaker: Ryo Suzuki
Tuesday, March 19, 11:30am - 12:30pm MT
ShapeBots: Shape-changing Swarm Robots for Collective and Individual Shape Transformation
Abstract: Swarm user interfaces have attracted much attention in HCI due to their unique capability of distributed and ubiquitous shape transformation. However, the homogeneous form factor of each swarm robot limits the range of interactions, expressions, and affordances of swarm UIs. To address this, we propose shape-changing swarm robots, a new approach to distributed shape-changing interfaces that can change their shapes both collectively and individually: each robot can individually transform its shape, while retaining the capability to move and collectively form a shape with many others. In this paper, we first explore the design space of self- and collective shape transformation enabled by line-based shape-changing swarm robots, and then demonstrate this idea by introducing ShapeBots, a proof-of-concept prototype of shape-changing swarm robots with a novel miniature reel-based linear actuator. Our actuator is only 1 cm thick yet can extend to 20 cm, which enables a swarm of miniature robots to transform into large shapes. We demonstrate a set of application scenarios enabled by ShapeBots, including line-based swarm drawing and distributed 2.5D shape displays.
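To make the line-based idea concrete, here is a hypothetical sketch (illustrative only, with parameter values assumed from the 1 cm / 20 cm figures above, not the authors' control code) of how a target polyline might be split into segments short enough for one extendable robot each:

    # Hypothetical illustration: approximate a target polyline with extendable swarm robots.
    # Each robot is assumed to render one straight segment of at most MAX_LEN metres.
    import math

    MAX_LEN = 0.20  # metres; assumed from the 20 cm extension figure

    def segment_polyline(points, max_len=MAX_LEN):
        # Split each edge of the polyline into pieces no longer than max_len,
        # returning one (start, end) pair per robot.
        segments = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            length = math.hypot(x1 - x0, y1 - y0)
            n = max(1, math.ceil(length / max_len))  # robots needed for this edge
            for i in range(n):
                t0, t1 = i / n, (i + 1) / n
                segments.append((
                    (x0 + t0 * (x1 - x0), y0 + t0 * (y1 - y0)),
                    (x0 + t1 * (x1 - x0), y0 + t1 * (y1 - y0)),
                ))
        return segments

    outline = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.3)]
    print(len(segment_polyline(outline)), "robots needed")  # 3 + 2 = 5 segments

Each resulting segment would then be assigned to one robot, which drives to the segment's start point and extends its actuator toward the end point; a full system would also need to assign robots to segments and avoid collisions, which this toy sketch ignores.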
RoomShift: Reconfigurable Spatial Environments through Room-scale Dynamic Swarm Construction
Abstract: This paper introduces RoomShift, a room-scale shape-changing interface built through dynamic swarm construction. Traditional shape-changing interfaces mostly focus on interactions at the scale of human hands, leaving larger-scale shape change underexamined. In this work, we propose dynamic swarm construction, a class of systems that leverages wheel-based robots to reconfigure themselves or construct adaptive environments. We used 10 wheel-based robots (e.g., Roomba) that can dynamically move, transform, and construct spatial elements such as walls, stairs, furniture, floors, and pillars. These robots support two kinds of reconfiguration: 1) actuating existing objects (e.g., chairs and tables) and rearranging them, and 2) transforming themselves into dynamic, temporary building elements (e.g., walls and pillars). Because of this distributed, scalable, and ubiquitous shape-changing capability, we demonstrate that the configuration can support a variety of spatial user interactions. We explore three possible application scenarios: room-scale haptic interfaces for VR/AR, prototyping spatial layouts for architects, and adaptive dynamic furniture. We evaluate these applications through user evaluations. Finally, we outline a design space of dynamic swarm construction and discuss interaction possibilities for Human-Architecture Interaction.
Bio: Ryo Suzuki is a fourth-year Ph.D. student in Computer Science at CU Â糖心Vlog破解版. He is advised by Daniel Leithinger (and works closely with Mark Gross and Tom Yeh), and is a member of the THING lab and ACME lab at the ATLAS Institute. His research interests lie in the area of tangible user interfaces; more specifically, he is interested in spatial and ubiquitous shape-changing interfaces: dynamic physical interfaces that can be distributed and embedded into an environment. During his PhD, he has published more than ten full papers at top conferences, including CHI, UIST, ASSETS, and ICSE.

Supporting Spatial Thinking Skills in Novice 3D Modelers through 3D Modeling and Augmented Reality
Speaker: Srinjita Bhaduri
Tuesday, March 12, 11:30am - 12:30pm MT
Abstract: Learning 3-dimensional (3D) solid modeling using computer-aided design (CAD) software can be a challenge, requiring the user to have a good spatial understanding of 3D space and the ability to adapt to a new environment where their existing skills and knowledge might not apply. This makes both teaching and learning 3D modeling difficult. Moreover, learning 3D modeling can help enhance spatial thinking skills, which play a critical role in achievement in science, technology, engineering, and mathematics (STEM) fields. Spatial thinking skills can be enhanced by training, life experience, and practice. However, 3D modeling alone is not sufficient as a platform for a rich 3D experience, since 3D models are entirely isolated from the actual 3D physical world. To address this challenge, I propose making 3D modeling approachable for users with varying levels of expertise by introducing different methods of 3D modeling. In my talk, I will provide initial results from using augmented reality as a scaffold for learning 3D modeling and as a 3D model debugger for novice users, focusing mostly on middle and high school students.
Bio: I am a third-year Ph.D. student in Computer Science and Cognitive Science at the University of Colorado Â糖心Vlog破解版. I am advised by Dr. Tamara Sumner, who leads the Sumner Lab (earlier known as Digital Learning Sciences). My research interests lie in the areas of Human Centered Computing, Augmented Reality, and 3D Printing/Modeling. I am particularly looking at ways we can use augmented reality as a scaffold to help enhance the 3D modeling skills of novices and, in turn, help enhance their spatial thinking abilities.

So You Want to be a Puppeteer?
Speaker: Kellie Masterson
Tuesday, March 5, 11:30am - 12:30pm MT
Abstract: Puppets and puppetry have been part of human history since Homo sapiens began throwing shadows on cave walls. Animating inanimate objects for the purposes of play, storytelling, ritual and subversion is found in all cultures. In this talk we will look at the history of puppets and their uses, the types of puppets from the simplest to the digital, and the reasons for puppetry. You will also get a chance to make simple puppets.
Bio: Kellie Masterson's degrees are in anthropology/archaeology with a specialty in prehistoric technology. She has done substantive editing in fields as varied as entomology, agriculture and agronomy. More recently she has been playing in the College of Music designing and building installations and advocating for a puppet opera. As an analog type (material culture rules!), she looks forward to learning more about AR/VR applications.

Digital Sketching: Computer-aided Conceptual Design
Speaker: John Bacus
Tuesday, February 26, 11:30am - 12:30pm MT
Abstract: The practice of conceptual design in architecture and industrial design has received relatively less attention from the rising tide of computer-aided design (CAD) technology than the practices of documentation and simulation. And yet, every design project must pass through a conceptual design phase before it can advance into the more detailed stages that follow. Questions like "...what should we build?" and "...for whom are we building?" are quickly passed over in a desire to reach the detailed phases more efficiently. It is undoubtedly the case that traditional computing technology is well suited to handling detailed design and automating it with increasing degrees of fidelity. But humans are still better at conceptual design than machines are, and it doesn't take much observation to reveal this. Most architects who care about design are still drawing by hand, on paper, at the beginning of their projects. And most car bodies are still styled by hand in clay. Let's talk about why this is the case and what sort of computational tools might help designers bridge into trusted systems for "computer-aided conceptual design."
Bio: John Bacus is a Product Management Director at Trimble Navigation, where he is responsible primarily for the growing SketchUp family of products. Before joining Trimble, John was SketchUp's Product Manager at Google and before that the Director of Product Design for @Last Software. He has worked on SketchUp from its first Mac OS X release through the product's acquisition by Google in 2006 and then again by Trimble in 2012. During this time, SketchUp has won numerous awards, including "3D Product of the Year" and a "5 Mice" rating from Macworld Magazine. In the last year alone, there have been over 30 million unique user activations of SketchUp, making it the most widely used 3D modeling product in the world. Prior to @Last, John was a professional design consultant working on a wide range of architectural and urban design projects in both Europe and the U.S. In addition to his work on SketchUp, John hangs around the ATLAS Institute at the University of Colorado, where he teaches a practical, launch-oriented studio class in Software Product Management.

Inclusive Media Creation: Creating Equity in Access to Creative Practices
Speaker: Abigale Stangl
Tuesday, February 19, 11:30am - 12:30pm MT
Abstract: In this presentation I will give my job talk, presenting three areas of research centered on elucidating and reducing the social and technical factors that exclude people with disabilities from gaining media and information literacy (MIL). MIL competencies are vital to full participation in the contemporary information landscape and to an overall sense of self-determination and agency.
Bio: Abigale Stangl is an ATLAS PhD candidate advised by Dr. Tom Yeh in Computer Science. Abigale holds a master's degree in Information Communication Technology for Development, a graduate diploma in Landscape Design, and a bachelor's degree in Environmental Design and Planning. Abigale has worked as a research assistant, an accessibility consultant for museums and libraries, a landscape designer, an installation fabrication assistant, an environmental educator, a watershed coordinator, and a field biologist, all around the United States.

Why can't programming be like sketching?
Speaker: Clayton Lewis
Tuesday, February 12, 11:30am - 12:30pm MT
Abstract: At a joint meeting of the Psychology of Programming Interest Group and the Art Workers Guild (London, September 5-7, 2018), Charlie Gere asked, why can't programming be like sketching? The ambiance of the meeting included testimonies from Guild members that cast computing, and programming, as repellent in the literal sense: an activity that people would like to avoid, even if it is useful. "Sketching" in the question stands for another kind of activity, one lacking these repellent qualities and having the attractive qualities of enjoyable expression. Can programming be like that?
Bio: Clayton Lewis is Coleman-Turner Professor of Computer Science and Fellow of the Institute of Cognitive Science at CU.

Engaging Players Through Physical Embodied Game Interfaces
Speaker: Peter Gyory
Tuesday, February 5, 11:30am - 12:30pm MT
Abstract: This talk will present the work of Peter's design thesis, which examines the field of embodied game interfaces. The inherent playfulness of games makes them a powerful medium for experimenting with the social and emotional aspects of technology. Games offer the opportunity to create unique and isolated interactions that would otherwise be difficult to study without the "magic circle" their mechanics create. Embodied interfaces are ones that bridge the gap between the digital and physical worlds by engaging the environment the technology is embedded in, including the physical movements of the users. In this talk Peter showcases a survey of projects that examine how physical gameplay mechanics can affect the social, emotional, and learning elements of games, including his current project HOT SWAP, a reconfigurable game controller that requires players to swap components during gameplay.
Bio: Peter Gyory is a Creative Technologies and Design master's student at the ATLAS Institute. He holds a bachelor's degree in Game Design and Development from the Rochester Institute of Technology. Peter has worked as a research assistant, web developer, and game designer. His research primarily focuses on designing and studying multi-user tangible game interfaces that engage the physical space around players, and studying the way players communicate strategies with each other.

Earable Computers: Ear-worn Systems for Healthcare, HCI, BCI, and Brain Stimulation
Speaker: Tam Vu
Tuesday, January 29, 11:30am - 12:30pm MT
Abstract: This talk introduces the concept of "earable computers": small computing and actuating devices that are worn inside, behind, around, or on a user's ears. Earable sensing and actuation are motivated by the fact that human ears are relatively close to the sources of many important physiological signals, such as the brain, eyes, facial muscles, heart, core body temperature, and more. Therefore, placing sensors and stimulators inside the ear canals or behind the ears could enable a wide range of applications in human-computer interaction, health care, attention/focus monitoring, and opioid use reduction, to name a few. Drawing an analogy from the evolution of mobile and wearable systems, in this talk I will discuss the opportunities that earable systems could bring. I will share our experience and lessons learned in realizing such earable systems in the context of human-computer interaction, brain-computer interaction, and healthcare. I will also elaborate on the software, hardware, and practical challenges of earable systems.
Bio: Tam Vu is an assistant professor in the Department of Computer Science at the University of Colorado Â糖心Vlog破解版. He directs a research lab at the university, where he and his team conduct systems research in the areas of wearable and mobile systems, exploring a user's physiological signals and using them to invent new human-computer interaction techniques and health-care solutions. The outcomes of his work include an NSF CAREER Award, two Google Faculty Awards, nine best paper awards and nominations, and research highlights in flagship venues in mobile systems research, including MobiCom, MobiSys, and SenSys. He also actively pushes his research outcomes into practice through technology transfer, with 15 patents filed, and has attracted external investment for two start-ups that he co-founded to commercialize them.

Mobile Visual Crowdsensing
Speaker: Qi Han
Tuesday, January 22, 11:30am - 12:30pm MT
Abstract: Mobile visual crowdsensing (MVCS) uses the built-in cameras of smart devices to capture the details of interesting objects/views in the real world in the form of pictures or videos. It has attracted considerable attention recently due to the rich information that can be provided by images and videos. MVCS is useful and in many cases superior to traditional visual sensing that relies on the deployment of stationary cameras for capturing images or videos. In this talk, I will first describe several building blocks for a cooperative visual sensing and sharing system: event localization, efficient picture stream segmentation and sub-event detection based on crowd-event interaction patterns, and picture selection for event highlights using crowd-subevent entropy of pictures. I will then present how MVCS is used for CrowdNavi, a mobile app we developed for last-mile outdoor navigation for pedestrians.
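As a rough illustration of the last building block (not the authors' actual algorithm; the scoring rule and toy data below are assumptions), one could greedily select highlight pictures so that the chosen set covers sub-events as evenly as possible, scored by the Shannon entropy of the covered sub-event distribution:

    # Hypothetical sketch: entropy-guided selection of event-highlight pictures.
    # Pictures are tagged with the sub-event they depict; we greedily add the picture
    # whose inclusion maximizes the Shannon entropy of the covered sub-events.
    import math
    from collections import Counter

    def subevent_entropy(selected, subevent_of):
        counts = Counter(subevent_of[p] for p in selected)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def pick_highlights(pictures, subevent_of, k):
        selected = []
        for _ in range(k):
            remaining = [p for p in pictures if p not in selected]
            best = max(remaining, key=lambda p: subevent_entropy(selected + [p], subevent_of))
            selected.append(best)
        return selected

    # Toy data: four crowd-contributed pictures covering three sub-events.
    subevent_of = {"p1": "goal", "p2": "goal", "p3": "halftime", "p4": "crowd"}
    print(pick_highlights(list(subevent_of), subevent_of, k=3))  # e.g. ['p1', 'p3', 'p4']

The real MVCS pipeline infers sub-events from crowd-event interaction patterns rather than hand-assigned tags, but the sketch captures the intuition that an informative highlight set spreads its coverage across sub-events.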
Bio: Qi Han is an Associate Professor in the Department of Computer Science at the Colorado School of Mines. She founded and directs the Pervasive Computing Systems (PeCS) research group. Her broad research interests lie in the areas of pervasive computing and mobile systems, with a current focus on applying mobile sensing, crowdsourcing, the Internet of Things, swarm robotics, and real-time analytics to understand human activities and improve the safety and efficiency of human life. She has also been active in interdisciplinary research, where she has used wireless sensor networks for applications such as monitoring subsurface contaminants and developing energy-efficient buildings; she is also working on techniques to enable large swarms of small spacecraft. Her research has been funded by the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), the Department of Energy through collaboration with the National Renewable Energy Laboratory (NREL), and the Petroleum Institute at Abu Dhabi, UAE. Dr. Han holds a Ph.D. degree from the Donald Bren School of Information and Computer Sciences at the University of California, Irvine. She has served on a number of technical program committees for international conferences and has held several workshop and conference program chair positions. She is an ACM Distinguished Speaker, an ACM senior member, and an IEEE senior member.

Exploring VR and AR applications in the Museum and Medical Contexts
Speaker: Ellen Do
Tuesday, January 15, 11:30am - 12:30pm MT
Abstract: Virtual reality and augmented reality (VR, AR) are gaining a lot of attention in many domains. They can be effective in communicating content that is three-dimensional, dynamic, and more engaging for users and audiences. While at the Keio-NUS CUTE Center, Ellen and her fellow researchers worked with industry on over a dozen VR/AR projects, many in museum and medical contexts. In this talk, she will share the challenges of these collaborations: defining project scopes, forming research questions, and securing publications and sponsorships.
Bio: Ellen Yi-Luen Do (Professor, ATLAS Institute & Computer Science) invents at the intersections of people, design and technology. She works on computational tools for design, especially sketching, creativity and design cognition, including creativity support tools and design studies, tangible and embedded interaction and, most recently, computing for health and wellness. She holds a PhD in Design Computing from the Georgia Institute of Technology, a Master of Design Studies from the Harvard Graduate School of Design and a bachelor's degree from National Cheng Kung University in Taiwan. She has served on the faculties of the University of Washington, Carnegie Mellon University and the Georgia Institute of Technology. From 2013 to 2016, she co-directed the Keio-NUS CUTE Center in Singapore, a research unit investigating Connected Ubiquitous Technology for Embodiments. She served as Design Community Chair for ACM CHI 2012 and 2013 and as an Associate Chair for the Design subcommittee for CHI 2015. She is currently serving on the Steering Committee for the ACM Creativity and Cognition (C&C) conference series (since 2015), on the Steering Committee for the ACM Tangible, Embedded and Embodied Interaction (TEI) conference series (since 2011; SC Chair 2016-2018), and as an Associate Editor for the ACM Journal of Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT). Recently she served as conference or program co-chair for several international conferences: ACM C&C 2015 in Glasgow, C&C 2017 and Augmented Human 2015 in Singapore, and TEI 2017 in Japan.