Fall 2019 Colloquia

Distributed and Collective Robots as Ubiquitous Interfaces between Humans, Computers, and Environments
Speaker: Ryo Suzuki
Tuesday, December 10, 11:30am - 12:30pm MT
Abstract: In the near future, robots will increasingly enter our everyday lives, and the number of robots we interact with will grow significantly as they become more ubiquitous. Once hundreds or thousands of robots are distributed and embedded in our environment, they will become indistinguishable from everyday objects; we will gradually perceive them not as robots but as collective things that move around us.
Robots as Things is a manifesto for this vision, in which distributed and collective robots are seen and used as elements of ubiquitous tangible interfaces. This perspective pushes the boundary of today's tangible interfaces: the distributed and autonomous nature of these robots gives tangible objects more agency to be at hand on demand; they appear only when needed and disappear into the background when not. Moreover, because they are active and responsive, they can represent the dynamics and fluidity of underlying digital information within the physical world. As the boundary between robots and things blurs, the way we interact with them can differ significantly from current human-robot interaction, which primarily focuses on foreground interaction with a single robotic agent (e.g., humanoid or personal robots).
To rethink this interaction paradigm, I propose two key concepts: 1) robots as a medium: using collective robots as elements of graspable media that serve as objects with which to think, design, and communicate ideas; 2) robots as ambient assistants: leveraging distributed robots to provide ambient affordances and in-situ assistance from the background. In the first direction, I explore the use of these robots as a dynamic medium for tangible interaction. By leveraging the collective behavior of discrete elements, they can reconfigure themselves to represent digital information as collective matter. In the second direction, these distributed robots can serve as assistants that calmly support human interaction without requiring active attention, much as today's ubiquitous computing does, but with the ability to actively engage the physical world as distributed affordances. This thesis illustrates these concepts by demonstrating prototype systems and application scenarios at different scales: material scale (mm-scale: Dynablock, Reactile, FluxMarker), object scale (cm-scale: ShapeBots, MorphIO), and room scale (m-scale: LiftTiles, RoomShift). Finally, I describe a path toward making this vision a reality by speculating on near-future scenarios and identifying underlying research opportunities and challenges.
Bio: Ryo Suzuki is a 5th-year Ph.D. student in the Department of Computer Science at the University of Colorado 糖心Vlog破解版, advised by Daniel Leithinger and Mark Gross. His research interest lies in Human-Computer Interaction, specifically tangible user interfaces. In his previous work, he has explored how distributed and collective robots and active materials can be used as dynamic physical interfaces and media that weave themselves into the fabric of everyday life. During his Ph.D., he has published more than eleven full papers and six poster/workshop papers at top conference venues in HCI, such as CHI, UIST, and DIS, and has won a Best Paper award. Previously, he did internships at Stanford, UC Berkeley, the University of Tokyo, and Adobe Research. He is also a recipient of JST ACT-I funding and the Nakajima Foundation Scholarship.

Hybridizing Digital and Physical Worlds through Augmented Reality Computer Systems Research
Speaker: Robert LiKamWa
Tuesday, December 10, 11:30am - 12:30pm MT
Abstract: Augmented reality (AR) overlays digital material on the physical world, enabling use cases in data visualization, instructional guidance, immersive education and entertainment. But beyond simple overlays, what opportunities can we pursue to further integrate our digital and physical worlds? In this talk, we will showcase recent research projects in AR systems software and hardware, including improved efficiency and precision through visual computing pipeline optimizations, our GLEAM framework for illuminating virtual AR materials with lighting estimates from the physical environment, and our SWISH haptic devices that make AR fluids feel fluid-like. With a fuller integration between the digital and the physical, we'll be able to create opportunities for future immersivity.
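As a rough illustration of the lighting-estimation idea (not GLEAM's actual algorithm, which captures physically situated light probes), the Python sketch below derives a crude ambient color from a camera frame and uses it to tint a virtual material. All names and values here are invented for illustration.

```python
# A deliberately naive sketch of lighting estimation for AR: estimate
# ambient light from a camera frame, then shade a virtual object with it.
# GLEAM's real pipeline is far more sophisticated; this toy version just
# averages the frame's pixels.
import numpy as np

def estimate_ambient(frame):
    """frame: HxWx3 uint8 RGB image. Returns the mean color in [0, 1]."""
    return frame.astype(np.float32).mean(axis=(0, 1)) / 255.0

def shade(albedo, ambient):
    """Multiply a virtual material's base color by the ambient estimate."""
    return np.clip(np.asarray(albedo) * ambient, 0.0, 1.0)

# Toy usage with a synthetic warm-lit frame (invented values):
frame = np.tile(np.array([200, 160, 110], dtype=np.uint8), (480, 640, 1))
ambient = estimate_ambient(frame)
print(shade([1.0, 1.0, 1.0], ambient))  # a white object takes on the warm tint
```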
Bio: Dr. Robert LiKamWa is an assistant professor at Arizona State University, appointed in the School of Arts, Media and Engineering and the School of Electrical, Computer and Energy Engineering. LiKamWa directs the Meteor Studio (towards Mobile and Embedded Technologies for Experiential/Optimization Research), which designs software and hardware systems to raise the performance, efficiency, and expressiveness of smartphones, tablets, VR/AR, and other mobile systems. His research is supported by four NSF grants, and funding from the Samsung Mobile Processing Innovation Lab. LiKamWa completed his BS, MS, and PhD degrees at Rice University in the Department of Electrical and Computer Engineering.

What Information Should a Robot Convey?
Speaker: Hooman Hedayati
Tuesday, December 3, 11:30am - 12:30pm MT
Abstract: Robotic technologies are becoming increasingly pervasive within industrial and domestic settings, resulting in more frequent interactions between humans and robots. To ensure these interactions are effective, the field of Human-Robot Interaction (HRI) has argued that robots and humans must establish a shared common ground by communicating fundamental pieces of information to each other, such as their intentions, goals, plans, and status. While a relatively large body of work has explored how robots might signal individual aspects of such information to users, we still know relatively little about what information we should signal and its relative importance overall (e.g., is communicating robot status more important than robot goals?). Robots acting in the wild need prioritized lists of communicative goals because, at any given time, a robot is unlikely to be able to convey all possibly relevant or important information to users. Prioritizing information for users is a complex problem, as many factors might influence information priority, including task context, user expertise, and robot capability.
My assumption is that there may be a gap between the existing literature on what information robots should convey to users and what users actually want to know. To investigate this, I aim to learn what users want to know and how important this information is to them, as well as to survey what has been done in the field of HRI so far. My ultimate goal is to determine "what" information users want to know about robots (e.g., battery status, the robot's next destination, etc.) and "how" such information ought to be conveyed to the user (e.g., LEDs, sound, gestures, etc.).
Within this overarching research agenda, there are four research questions that I am interested in answering. First: "what information should the robot convey to the user?" Answering this question will help designers design based on users' needs and match users' understanding of the robot, e.g., signaling the current task progress rather than the battery percentage if that is what users want to know. The second question is: "what is the order of importance of this information?" For example, if information on safety and robot direction are both important, which is more important to users? The third question is: "what gaps exist between the answers to the first question and the current literature?" Finally: "how can we improve current signaling mechanisms and propose designs that signal to users more efficiently?"
I have already taken an initial step towards answering the first two research questions by exploring what types of information users request, and how the rankings of informational importance that users assign change, in a prototypical shared-environment interaction with three different types of robots. My results, collected from 150 participants on Amazon's Mechanical Turk, found that users generally value information related to the robot's battery, capabilities, task, safety, navigation, communication, and privacy, with user priorities for these items varying across a small ground robot, a large ground robot, and an aerial robot. I observed that safety is the most important category for all robot sizes (small/big) and types (aerial/ground). Across all responses, the single most important piece of information users wanted to know was "whether or not it is safe to get close to the robot." Information about the robot itself, such as "how to look up more information about the robot," was the least important based on user responses.
As the next step, for my prelim exam, I will review 20-30 papers published in robotics conferences (RSS, HRI, ICRA) over the past few years to survey approaches to robot signaling mechanisms and determine how prior work aligns with the preliminary data I have collected.
Bio: Hooman is a doctoral student in the Department of Computer Science. His research interests include human-robot interaction, machine learning, and robotics. He is currently studying interactions between flying robots and humans.

PhET Interactive Simulations Applications: Learning Resources, Multimodal Playground, and Tools for Change
Speaker: Emily Moore
Tuesday, November 19, 11:30am - 12:30pm
Abstract: The PhET Interactive Simulations project has developed more than 150 free and open-source interactive science and math simulations over the last two decades. Used around the world and considered a mainstay in STEM classrooms, PhET has a rich history of innovation. In this presentation, I'll share reflections and insights from one of PhET's research initiatives: making the most accessible multimodal learning resources available at scale. We'll explore what it takes to make a visual learning tool accessible (and enjoyable!) non-visually, the intersections of diverse modalities (e.g., description, sonification, haptics), how we make these resources readily available to the world, and what's next in innovation for PhET.
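For readers unfamiliar with sonification, the hedged Python sketch below shows the core idea in its simplest form: mapping a simulation parameter to pitch so it can be heard rather than seen. PhET's actual sonification designs are far richer; the value range, frequencies, and filename here are invented placeholders.

```python
# A minimal parameter-to-pitch sonification sketch using only the
# standard library: map a value in [lo, hi] linearly to a frequency,
# synthesize a sine tone, and write it to a WAV file.
import math, struct, wave

def value_to_tone(value, lo=0.0, hi=1.0, f_lo=220.0, f_hi=880.0,
                  seconds=0.5, rate=44100):
    """Map value in [lo, hi] to a frequency in [f_lo, f_hi] and
    synthesize a sine tone at that pitch."""
    t = (value - lo) / (hi - lo)
    freq = f_lo + t * (f_hi - f_lo)
    return [math.sin(2 * math.pi * freq * n / rate)
            for n in range(int(seconds * rate))]

# Toy usage: sonify a slider sitting at 75% of its range.
samples = value_to_tone(0.75)
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(44100)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                           for s in samples))
```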
Bio: Emily is the Director of Research & Accessibility for PhET Interactive Simulations. Her work advances the development and use of multimodal interactive learning resources that support accessible, effective, and enjoyable science learning for all students, including students with disabilities. With research interests at the intersections of science education, multimodal educational technology, and inclusive design, she conducts research on multimodal simulation design, and student use and learning in K-12 and undergraduate settings. In particular, Emily enjoys leading large collaborative research projects and partnering with kind and interesting people with diverse expertise. Her current collaborations span three countries, three US states, and her team at PhET spans 7 time zones.

GestureLock: The Security and Usability of Freeform Gestures for Phone Unlock
Speaker: Ian Oakley
Friday, November 8, 11:30am - 12:30pm
Abstract: Touchscreen gestures are attracting research attention as an authentication method. While studies have showcased their usability, it has proven more complex to determine, let alone enhance, their security. Problems stem both from the small scale of current data sets and from the fact that gestures are matched imprecisely, by a distance metric. This makes it challenging to assess entropy with traditional algorithms. To address these problems, we captured a large set of gesture passwords (N=2594) from crowd workers and developed a security assessment framework that can calculate partial guessing entropy estimates and generate dictionaries that crack 23.13% or more of gestures in online attacks (within 20 guesses). To improve the entropy of gesture passwords, we designed novel blacklist and lexical policies to, respectively, restrict and inspire gesture creation. We close by validating both our security assessment framework and our policies in a new crowd-sourced study (N=4000). Our blacklists increase entropy and resistance to dictionary-based guessing attacks.
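For context, here is a minimal Python sketch of the guessing-theoretic quantities the abstract refers to, following Bonneau's partial-guessing framework: lambda_beta, the success rate of an online attack limited to beta guesses, and the alpha-guesswork G_alpha. Note that it treats each gesture as an exact token; the paper's central difficulty, which its assessment framework addresses, is that gestures match only approximately under a distance metric. The sample data is invented.

```python
# A minimal sketch of partial guessing metrics over an observed sample
# of passwords, assuming exact (string-like) matching.
from collections import Counter

def guessing_metrics(passwords, alpha=0.5, beta=20):
    """Estimate lambda_beta (success rate of a beta-guess online attack)
    and alpha-guesswork G_alpha from a sample of passwords."""
    counts = Counter(passwords)
    n = len(passwords)
    # Probabilities of distinct passwords, most common first.
    probs = sorted((c / n for c in counts.values()), reverse=True)

    # lambda_beta: fraction of accounts cracked by trying the beta
    # most common passwords.
    lam_beta = sum(probs[:beta])

    # mu_alpha: smallest dictionary succeeding with probability >= alpha.
    cum, mu = 0.0, 0
    for p in probs:
        cum += p
        mu += 1
        if cum >= alpha:
            break

    # G_alpha: expected guesses per account for an attacker who stops
    # once success rate alpha is reached.
    g_alpha = (1 - cum) * mu + sum(i * p for i, p in enumerate(probs[:mu], 1))
    return lam_beta, mu, g_alpha

# Toy usage with made-up gesture-password labels:
sample = ["circle"] * 30 + ["zigzag"] * 10 + ["star"] * 5 + ["s42"] * 2
print(guessing_metrics(sample, alpha=0.5, beta=20))
```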
Bio: Ian Oakley is an associate professor at the School of Design and Human Engineering at UNIST in South Korea. He holds a BSc (Joint Honours First Class) in Computing Science and Psychology and a PhD in Computer Science from the University of Glasgow, UK. He has worked in Ireland (MIT MediaLab Europe), Korea (GIST and ETRI) and Portugal (University of Madeira) and spent time as a visiting professor in Korea (KAIST) and the USA (Carnegie Mellon HCII). His research focuses on the design, development and evaluation of multi-modal interfaces and social technologies. He has published on this topic in both leading conferences (such as ACM CHI, ACM TEI and ACM CSCW) and journals (such as the IJHCS and IEEE Computer). He is deputy editor of Interacting with Computers and is, although he no longer sounds like it, Scots.

Designing Communicative Visualization for People with Intellectual Developmental Disabilities
Speaker: Keke Wu
Tuesday, November 5, 11:30am - 12:30pm MT
Abstract: Visualization research has paid little attention to individuals with intellectual and developmental disabilities (IDDs). This lack of attention is problematic because consuming a visualization relies on a significant number of cognitive processes, including the ability to read and process language and retain information, and these processes often operate differently for individuals with IDDs. In this talk, Keke Wu argues that visualization should be used to communicate with and by IDD populations. She will discuss how different visualization design elements, including chart types, data continuity, and chart embellishments, could improve communication between people with IDDs and data. As part of her post-conference report, she will also share her experience of the IEEE VIS 2019 conference and discuss current trends in visualization research.
Bio: Keke is a first-year ATLAS PhD student working in the field of visualization with Dr. Danielle Albers Szafir. Her research interest is to design and explore new approaches that make technology more accessible. She loves doing visualization research and building tools that help people communicate with data and solve real-life problems in an interactive and interesting way.

Context is for Kings - Heuristics, data and machine learning
Speaker: Nikolaus Klassen
Tuesday, October 29, 11:30am - 12:30pm MT
Abstract: Big Data and Machine Learning applications today rely first and foremost on the volume and quality of data used to build and train their models. Given that we increasingly look to the brain as a source of inspiration to improve our Machine Learning models, it is notable that the brain favours a tool that stands in marked contrast to our current approach. The brain loves to use heuristics instead of exhaustive data. Heuristics can make us very successful in ambiguous environments, they make the brain computationally highly efficient, and they are one of the reasons that some things which seem so simple to humans are so difficult to replicate with machines, e.g., Natural Language Processing. On the other hand, heuristics can lead to wild errors, especially in situations that need a quantitative solution. Most heuristics are built on two qualities. They aim for simplicity, which means reducing - not increasing - the amount of information that is considered. And they rely on understanding the context of problems or situations to solve them. While the first seems unnecessary - or even undesirable - for Machine Learning, the second quality still seems to be beyond the reach of machine capabilities. And yet, the ongoing arms race for data with its expanding cost of resources on the one hand, and the growing awareness of the limits of current Big Data and Machine Learning models on the other, makes one wonder what would happen if we could teach machines to use heuristics. In this talk, I would like to explore this scenario and consider questions like: What would machines that are capable of using heuristics look like? How should they decide when to use which heuristic? And how would this impact the interaction between humans and machines?
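The talk does not name specific heuristics, but "take-the-best" (from Gigerenzer and Goldstein's fast-and-frugal research program) is a standard example of the information-reducing strategy described above, so treat the Python sketch below as an illustrative example rather than anything the speaker proposes. It shows how little of the available data such a heuristic actually consumes.

```python
# A minimal sketch of the "take-the-best" heuristic: check cues in order
# of validity, and let the first cue that discriminates decide. Most of
# the available information is deliberately ignored.

def take_the_best(a, b, cues):
    """Compare options a and b (dicts of binary cue values) using cues
    ordered from most to least valid. Returns the predicted winner."""
    for cue in cues:                       # ordered by cue validity
        va, vb = a.get(cue, 0), b.get(cue, 0)
        if va != vb:                       # first discriminating cue...
            return a if va > vb else b     # ...decides; ignore the rest
    return None                            # no cue discriminates: guess

# Toy usage (invented cues): which of two cities is larger?
cues = ["has_intl_airport", "is_capital", "has_university"]
berlin = {"has_intl_airport": 1, "is_capital": 1, "has_university": 1}
potsdam = {"has_intl_airport": 0, "is_capital": 0, "has_university": 1}
print(take_the_best(berlin, potsdam, cues))  # -> berlin's dict
```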
Bio: Dr. Nikolaus Klassen is fascinated by how the human brain selects data, applies a diverse mix of strategies to process it, and creates the knowledge humans accept as true and act upon. He has explored this process in contexts as diverse as the writing of poetry 1,500 years ago and the purchase of a pair of sneakers in our time. After studying History, he went into applied research and created a web-based platform for Knowledge Management in companies. For two years, he worked as a data analyst for Google. In this role, he explored how to use technology to extract insights from large and diverse data pools and also taught retailers how to work with data. Returning to university, he wrote his PhD thesis on how Early Christian poetry (4th and 5th century CE) reused and adapted concepts and patterns of thinking from the non-Christian Roman world around it. After more than a year of family leave, during which he watched the brains of two little girls develop those strategies of data processing with breathtaking speed, he is now planning to return to work as a data analyst.

Magic and Communication
Speaker: Eddie Goldstein
Tuesday, October 22, 11:30am - 12:30pm MT
Abstract: There are many parallels between being a good educator and a good magician. In both cases, you demonstrate phenomena, guide your audience to make sense of what they are seeing, and leave a strong and memorable impression. The key difference is that, as an educator, you try to convey a picture of the real world and how it works, while as a magician, you try to convey a picture of a magical world. But the techniques you use to accomplish these two goals can be very much the same.
In the spirit of gaining practical knowledge that works, session participants will leave with an appreciation for the cognitive strategies that magicians employ and for how to use them to strengthen their own science demonstrations.

Fabricating (Smart) Textiles -- Computational Design, Craft, and Radical Possibility
Speaker: Laura Devendorf
Tuesday, October 15, 11:30am - 12:30pm MT
Abstract: For the past 9,000 years, humans have been refining techniques and machinery for textile production. Credited with the birth of the industrial revolution as well as of computing, textile practices, metaphors, and innovations are so interwoven into our daily experience that it's easy to take their complexity and impact for granted. This talk will focus on re-acquainting the audience with textile manufacturing while addressing the potential for computational design to impact textile surface design, the integration of interactive or "smart" components, and sustainable innovation. I will argue that developing these tools may require new methods and structures for involving craftspeople and artists in their development, as well as new ways of understanding the role of design software within improvisational craft practices. As such, insights generated by addressing textile fabrication can reshape how we understand human-machine collaboration more broadly.
Bio: Laura Devendorf, assistant professor of information science with the ATLAS Institute, is an artist and technologist working predominantly in human-computer interaction and design research. She designs and develops systems that embody alternative visions for human-machine relations within creative practice. Her recent work focuses on smart textiles, a project that interweaves the production of computational design tools with cultural reflections on gendered forms of labor and visions for how wearable technology could shape how we perceive lived environments.
Laura directs the Unstable Design Lab. She earned bachelor's degrees in studio art and computer science from the University of California, Santa Barbara before earning her Ph.D. at the UC Berkeley School of Information. She has worked in the fields of sustainable fashion, design, and engineering. Her research has been featured on National Public Radio and has received multiple best paper awards at top conferences in the field of human-computer interaction.

Self-healable, recyclable and reconfigurable electronics
Speaker: Jianliang Xiao
Tuesday, October 8, 11:30am - 12:30pm MT
Abstract: Electronics, such as smartphones and wearable devices, play increasingly important roles in our daily life. Associated with the mass production and usage of electronics, tens of millions of tons of electronic waste (e-waste) are produced every year (42 million tons in 2014), and 70% of that e-waste goes directly to landfills. This presentation introduces the self-healable and recyclable electronics technology recently developed in our lab. In the first part of the seminar, I'll demonstrate a robust yet rehealable, fully recyclable and malleable e-skin based on a dynamic covalent thermoset doped with conductive silver nanoparticles. Tactile, temperature, flow, and humidity sensing capabilities are realized. The e-skin can be rehealed when it is damaged and can be fully recycled at room temperature. After rehealing or recycling, the e-skin regains mechanical and electrical properties comparable to the original e-skin. In addition, malleability enables the e-skin to permanently conform to complex, curved surfaces without introducing excessive interfacial stresses. In the second part, I'll discuss our recent progress on high-performance, integrated electronics that are self-healable, recyclable, and reconfigurable. This development heterogeneously integrates high-performance but rigid silicon chips, stretchable liquid metal, and self-healable, recyclable soft polymer. By combining novel strategies in materials advances and mechanical design, this heterogeneous system yields robust wearable electronics that can monitor human activities and physiological signals. Development along this direction could yield an economical and eco-friendly technology with broad applications in robotics, prosthetics, healthcare, and human-computer interfaces.
Bio: Jianliang Xiao is an Associate Professor in the Department of Mechanical Engineering at the University of Colorado 糖心Vlog破解版. Before joining CU 糖心Vlog破解版, he was a Postdoctoral Research Associate in Prof. John Rogers' group in the Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign. He obtained his Ph.D. degree in Mechanical Engineering in 2009 from Northwestern University, under the supervision of Prof. Yonggang Huang. His B.S. and M.S. degrees were from Tsinghua University in 2003 and 2006, respectively. His research interests include stretchable/flexible electronics, nanomaterials, soft materials, and thin films.

Flying Snakes and Other Outreach Tools
Speaker: Shaz Zamore
Tuesday, October 1, 11:30am - 12:30pm MT
Abstract: Flying snakes (genus Chrysopelea) are highly visual animals that climb, jump, and glide while navigating through tropical rainforests. These arboreal behaviors are likely to produce different visual problems than terrestrial locomotion. Arboreal behaviors require visual assessment to determine position, distance, and speed, particularly while gliding. Flying snakes possess features of visually sensitive animals, including large eyes and well-described visual behaviors, such as tracking planes and birds flying overhead. In my work in the Socha Lab at Virginia Tech, I measured their visual capabilities, in order to build an Immersive Virtual Visual Arena (IVVA) with which I could test real-time behavioral decision-making strategies. Outside of the lab, I found a deep curiosity about flying snakes from the public, and engaged in many outreach ventures, from classroom visits to museum exhibit design.
In this talk, I'll review my behavioral research at Virginia Tech, with a strong focus on experimental design and a myriad of snake flight videos. I'll also discuss preliminary results and broader applications of the research, and connect this work to outreach by discussing the psychology of human curiosity and exploration.
Bio: Sharri Zamore (Dr. Z) is a first-generation American (Jamaica; Commonwealth of Dominica, W.I.) with a deep interest in creating accessible science outreach and education (with a neuroscience focus). In her research career, Dr. Z has developed dynamic, interactive tools to study the behavioral neuroscience of tree swallows (Tachycineta bicolor), rats (Rattus norvegicus), mosquitoes (Aedes aegypti) and flying snakes (Chrysopelea spp.), yes, snakes that can fly. Outside of research, Dr. Z is an award-winning athlete (Golden Gloves Regional Champion, 2006; AIARE I certified snowboarder) and a highly experienced vocalist who has performed at the White House and with Nappy Roots. She merges her dynamic life and science interests to create unique, entertaining, and educational outreach events. Dr. Z is deeply invested in developing immersive informal educational tools, from home and classroom activities to large-scale installations.

Socially Assistive Robotics: Overview and Case Study of the Romibo Robot
Speaker: Aubrey Shick
Tuesday, September 24, 11:30am - 12:30pm MT
Abstract: Romibo is an open-source socially assistive robot for older adults and children with special needs, particularly autism. Born as a laser-cut class project that grew into an international platform reaching thousands, Romibo serves as an example of how field-deployable prototyping can have real impact. The project continues as a successful therapeutic intervention, with hardware and software derivatives reaching over 10,000 users in 7+ countries. Romibo has been featured in XPLORATION 2050, an Emmy-nominated series for Discovery Channel, and IQ Smartparent with Angela Santomero, Executive Producer of Blue's Clues and Daniel Tiger's Neighborhood.
Bio: Aubrey Shick is the Head of Technology and Research at Fine Art Miracles, Inc. (FAM), a non-profit using robots successfully with autism and memory-care patients, serving hundreds of clients a year and delighting thousands through public outreach. Outside of her non-profit work, Aubrey works in consumer hardware. She worked for Intel in the New Devices Group as a Product User Experience Lead for enterprise and consumer head-worn products, including JetPro and Vaunt. After Intel, Aubrey returned to social robotics as the Head of Human-Robot Interaction and User Experience at Embodied, Inc., a robotics "wellness" startup in stealth mode based in Pasadena, CA, for a year and a half. She has recently moved to 糖心Vlog破解版, CO, where she is working on a cross-platform software tool for socially assistive robotics.

My Life as a Synner
Speaker: Eric Lindemann
Tuesday, September 17, 11:30am - 12:30pm MT
Abstract: Eric Lindemann will talk about his evolution from musician-composer to computer music/audio DSP engineer. In the process, he will talk about the evolution of computer music over the last 50+ years. He will focus in particular on his widely used Synful Orchestra software, which is credited with raising the bar for expressive music synthesis.
Bio: Eric Lindemann is an engineer, musician, and composer. He is the inventor of Synful Orchestra, which raised the bar for expressive music synthesis. He led the design of the IRCAM Signal Processing Workstation for Pierre Boulez in Paris, which was used in computer music facilities around the world and gave rise to the popular program Max/MSP. He designed DSP microprocessors for Cirrus Logic and participated in the design of the first fully programmable DSP hearing aid for GN Resound. His acoustic echo cancellation system for QSC is used at the United Nations, at Shell Oil, and in boardrooms across the globe. He worked on noise cancellation for the iPhone. He and his daughter Anna have published papers on generative musical composition models inspired by evolutionary and developmental biology (Evo-Devo). He studied music composition with Nadia Boulanger, Olivier Messiaen and Iannis Xenakis. He played keyboards for numerous movie scores (Star Trek, ...) and toured with pop groups including the Fifth Dimension and the Osmond Brothers.

Playing with Words
Speaker: Joel Swanson
Tuesday, September 10, 11:30am - 12:30pm MT
Abstract: Language is always embedded within technology. From ink and paper to neural networks, the technologies, systems and structures of language play an active role in shaping the potentials and norms of discourse. Joel Swanson's work explores these technologies in an attempt to critically subvert the underlying assumptions and intentions of linguistic discourse. In this talk, he will give an overview of his work and discuss his theoretical and conceptual foundations for his creative practice.
Bio: Joel Swanson is an artist and writer who explores the relationship between language and technology. His work critically subverts the technologies, materials, and underlying structures of language to reveal its idiosyncrasies and inconsistencies. His work ranges from interactive installations to public sculptures that playfully and powerfully question words and their meanings.
Swanson teaches courses on typography, creative coding, and media theory at the ATLAS Institute at the University of Colorado 糖心Vlog破解版. He received his Master of Fine Arts from the University of California, San Diego with a focus in Computing and the Arts. His artwork has been exhibited nationally and internationally at institutions such as the Broad Museum in Lansing, The Power Plant in Toronto, the Glucksman in Cork, Ireland, and the North Miami Museum of Contemporary Art. In 2014, Swanson had a solo exhibition at the Museum of Contemporary Art Denver. In 2017, he showed work in Personal Structures, an official satellite show of the 57th Venice Biennale. He is represented by David B. Smith Gallery in Denver.

Sensing Kirigami
Speaker: Clement Zheng
Tuesday, September 3, 11:30am - 12:30pm MT
Abstract: This pictorial presents our material-driven inquiry into carbon-coated paper and kirigami structures. We investigated two variations of this paper and their affordances for tangible interaction, particularly their electrical, haptic, and visual aspects when shaped into three-dimensional forms through cutting, folding, and bending. Through this exploration, we uncovered distinct affordances between the two paper types for sensing folds and bends, due to differences in their material compositions. From these insights, we propose three applications that showcase the possibilities of this material for tangible interaction design. In addition, we leverage the pictorial format to expose working design schematics for others to take up their own explorations.
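As a hedged illustration of the sensing principle (consult the pictorial's own schematics for real circuits), the Python sketch below models how a resistive carbon-coated paper sensor in a voltage divider could be read and classified. All component values and thresholds, and the assumption that resistance rises with bending, are invented for illustration.

```python
# A hypothetical read-out sketch: bending carbon-coated paper changes its
# resistance, which a voltage divider turns into a measurable voltage.
V_SUPPLY = 3.3      # volts across the divider (assumed)
R_FIXED = 10_000    # ohms, fixed divider resistor (assumed)
ADC_MAX = 4095      # 12-bit ADC full scale (assumed)

def adc_to_resistance(raw):
    """Convert a raw ADC reading across the fixed resistor into the
    resistance of the paper sensor (sensor on the high side)."""
    v_out = V_SUPPLY * raw / ADC_MAX
    if v_out <= 0:
        return float("inf")
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def classify_bend(resistance, flat_r=20_000, fold_ratio=1.5):
    """Crude state classifier, assuming resistance rises as paper bends."""
    if resistance > flat_r * fold_ratio:
        return "folded"
    if resistance > flat_r * 1.1:
        return "bent"
    return "flat"

# Toy usage with made-up ADC readings:
for raw in (2600, 1200, 600):
    r = adc_to_resistance(raw)
    print(f"raw={raw} -> {r:.0f} ohms, {classify_bend(r)}")
```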
Publication:
Clement Zheng, HyunJoo Oh, Laura Devendorf, and Ellen Yi-Luen Do. 2019. "Sensing Kirigami". In: Proceedings of the 2019 Designing Interactive Systems Conference (DIS '19), San Diego, CA, June 23-28, 2019. [Award].
Bio: Clement Zheng is an industrial designer, researcher, and educator. His work spans computational design, making, and tangible interaction design. He is passionate about facilitating other designers in their creative processes, especially as they integrate digital fabrication and physical computing to realize their ideas.

Personal Biochips
Speaker: Mirela Alistar
Tuesday, August 27, 11:30am - 12:30pm MT
Abstract: In my work, I investigate how to ubiquitize healthcare by moving the process of diagnosis closer to the patient. Today, diagnosis requires patients to see a doctor to provide samples, which are then sent to a wetlab. The lab conducts tests on the samples and reports back to the doctor, who ultimately reports back to the patient. This process tends to take days or even weeks, valuable time during which patients live in uncertainty and disease is allowed to spread. What if instead doctors could perform the tests while the patient waits? Or, what if we could empower patients to perform selected tests at home, as part of their decision whether to see a doctor in the first place?
In this seminar, I will present my ongoing work on designing and fabricating novel biochip hardware (recently adopted by researchers at universities such as MIT and the University of Washington), writing system-level software (real-time compilation and fault-tolerant synthesis), and developing a visual system to edit bio-protocols interactively.
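To give a flavor of what "system-level software" means here, the toy Python sketch below solves one classic sub-problem of digital microfluidic biochip synthesis: routing a droplet across the chip's electrode grid around blocked cells (e.g., other droplets or modules). It is an illustrative breadth-first-search example, not the speaker's actual tooling.

```python
# A toy droplet router for a digital microfluidic biochip modeled as a
# grid of electrodes: 0 = free cell, 1 = blocked cell. Plain BFS finds
# a shortest sequence of electrode activations from start to goal.
from collections import deque

def route_droplet(grid, start, goal):
    """Return a path of (row, col) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                          # goal unreachable

# Toy 4x4 chip with a blocked region (invented layout):
chip = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(route_droplet(chip, (0, 0), (2, 2)))
```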
Bio: Dr. Mirela Alistar is currently an Assistant Professor at the University of Colorado 糖心Vlog破解版, after having completed a fellowship program at the Hasso Plattner Institute in Germany. In 2014, Mirela received her PhD in computer engineering from the Technical University of Denmark, where she worked on system-level design of embedded systems with a special focus on digital microfluidic biochips. In her research, Mirela investigates the extent to which we can change healthcare to make it a personal process. So far, Mirela has built systems based on biochips to serve as personal laboratories: small portable devices that people can own and use to develop customized bio-protocols ("bio-apps").
To engage society in critical discourse on technology, Mirela has founded and led community wetlabs in Copenhagen and Berlin, where she organizes monthly workshops that introduce personal biochips to enthusiasts of diverse backgrounds (e.g., engineering, art, design).