Spring 2022 Colloquia

Programmable Reality: Making the World a Dynamic Medium through Visually and Physically Programmable Environments

Speaker: Ryo Suzuki
Thursday, March 10, 11:30am - 12:30pm MT

Abstract: Today, a computational medium and its representations are trapped as "pictures under glass"---what we can program is limited to virtual objects on screens, which we cannot touch, manipulate, or interact with the way we do real objects in the real world.
My research goal is to change this paradigm to make our whole living environment "a dynamic medium"---instead of confining ourselves to a flat rectangular screen, I envision a future where the world itself is an expressive canvas and dynamic physical medium, as if we are living in a computer, rather than living with it. For example, what if we could draw dynamic sketches that can be embedded in and interact with the real physical environment? What if we could "render" an arbitrary physical shape, just like a 3D printer, but in a dynamic manner (i.e., render in seconds rather than hours)? What if our physical environments could transform and reconfigure themselves to provide haptic sensations or support our everyday life?
In this talk, I illustrate this vision of "Programmable Reality", which aims to blend pixels and atoms through both visually and physically programmable environments. By leveraging AR/VR, robotics, and shape-changing technologies, I show how we can transform and augment the world for the future of the dynamic medium. Then, I discuss open research challenges and opportunities to make this vision a reality.

Bio: Ryo Suzuki is an Assistant Professor in Computer Science at the University of Calgary, where he directs the Programmable Reality Lab. Prior to joining UCalgary, he received his PhD from the 糖心Vlog破解版 in 2020, where he was advised by Daniel Leithinger and Mark Gross. His research interest lies at the intersection of Human-Computer Interaction (HCI) and robotics. He explores how we can combine AR/VR and robotics technologies to make our environments more dynamic, interactive, and programmable to further blend the virtual and physical worlds. In the past six years, he has published more than eighteen full papers at top HCI and robotics venues such as CHI, UIST, IROS, and ICRA, three of which received paper awards. He has also served as a program committee member for CHI and UIST, among others. Previously, he worked as a research intern at Stanford University, UC Berkeley, the University of Tokyo, Adobe Research, and Microsoft Research.

Advancing Sound Accessibility

Speaker: Dhruv "DJ" Jain
Tuesday, March 8, 11:30am - 12:30pm MST

Abstract: The world is filled with a rich diversity of sounds, ranging from mundane beeps and whirs to critical cues such as fire alarms or spoken content. These sounds can be inaccessible not only to people with auditory-related disabilities, such as those who are deaf or hard of hearing (DHH), but also to hearing people in many situations. We all may find conversations difficult to hear in noisy bars, fail to notice the doorbell over a running vacuum cleaner, or miss our phone ringing while in the shower.
My work advances sound accessibility by developing interactive systems that leverage the state of the art in machine learning, signal processing, and wearable technology to sense sounds and provide feedback about them. To design these systems, I follow an iterative, user-centric research process ranging from formative studies, to design and evaluation of prototypes in controlled environments, to, crucially, deployments of full systems in the field. In this talk, I will discuss my past and ongoing research to advance three areas of sound accessibility: providing awareness of everyday sounds, supporting speech-based conversations, and improving the accessibility of sounds in emerging technologies such as AR/VR and smartwatches. I will conclude by outlining my future plans.

Bio: Dhruv "DJ" Jain is a final year PhD student in Computer Science and Engineering at the University of Washington. His research lies in Human-Computer Interaction (HCI) and focuses on accessibility. He has published over 20 papers in top HCI and accessible computing venues such as CHI, UIST, and ASSETS; seven have been honored with best paper and honorable mention awards. DJ's work has also been covered by the media (e.g., by CNN, New Scientist, and Forbes), is included in teaching curricula, and has been publicly launched (e.g., one system has over 75,000 users). During his graduate studies, he has worked at Microsoft Research, Google, and Apple on research addressing accessibility challenges on future commodity devices. DJ's work is supported by a Microsoft Research Dissertation Grant and a Google CMD-IT LEAP Alliance Fellowship.

Designing Computational Systems for Learning and Inclusion in a Future of Work

Speaker: Chinmay Kulkarni
Friday, March 4, 11:30am - 12:30pm MT

Abstract: Enabled by the internet, and accelerated by the pandemic, the future of work is already here. Today, we collaborate with distant colleagues we have never met in person, and employers rely on online labor platforms to find freelancers around the world. At the same time, computational work environments largely lack informal social interactions. Consequently, workers struggle to build rapport with colleagues, collaboration networks are siloed, and employers struggle to even evaluate potential workers. Based on my research, which has resulted in tools that have helped millions of learners in massive open online classes (MOOCs), I argue for a new approach to building computational work environments. Specifically, I show that combining findings from the behavioral sciences with computational techniques can create social interactions that scaffold learning, and can weave these interactions into the fabric of work. In this talk, I demonstrate this approach with systems that help people learn ambiguous skills, foster an environment that welcomes diverse viewpoints to help teams make better decisions, and allow a more inclusive range of employers to benefit from this future of work. Together, these systems point to a future where computing can create work environments that support learning and inclusion better than traditional work possibly could.

Bio: Chinmay Kulkarni is an Associate Professor in Human-Computer Interaction at Carnegie Mellon University, whose research introduces technology for scaling education and online work. His lab has created systems that scale feedback and assessment to thousands of learners in massive online classes, systems that extend peer feedback to work contexts where competition may prevent honest feedback, and systems for learning how to adapt to new forms of work. More than 50,000 learners have directly benefited from these systems, and companies as varied as Coursera, Mozilla, and Instagram have adopted the related research findings, benefiting millions more. His lab is also developing community-based design approaches that can yield scalable socio-technical solutions while still resisting the impulse to position certain community needs as edge cases. This research is currently supported by the NSF, the US Department of Education, and the Office of Naval Research. Past research sponsors include Mozilla and Instagram. Before coming to Carnegie Mellon, he earned a PhD from Stanford's Computer Science Department.

Toward an Ethics of AI Accessibility

Speaker: Cynthia Bennett
Thursday, March 3, 11:30am - 12:30pm MT

Abstract: Inaccessible information has wide-ranging consequences, from people with disabilities being unable to read COVID-19 infographics to their exclusion from digital networks, like social media and remote meetings, which frequently and rapidly transmit highly visual information. One approach to increasing nonvisual access to information for people who are blind or have low vision is artificial intelligence (AI), which promises automation and scalability, thereby decreasing the resources required to produce accessible information. However, research and media reports continue to illuminate AI bias and malicious applications. Because these harms tend to impact people who already experience marginalization, the ethics of applying AI to solve perennial accessibility challenges is complicated.

In this talk I will argue that frameworks from disability activism, like the one I developed for accessibility contexts called interdependence, are useful for understanding the experiences of people with disabilities and are also generative for considering ample factors while designing ethical and accessible information communication. To make this argument, I will give an overview of a project concerning one facet of information accessibility: the representation of people in human- and AI-generated alternative (alt) text descriptions. I will share findings from interviews I conducted with blind people who rely on alt text to understand visual information and who also identified with a minoritized race or gender shown to be disproportionately misrepresented by AI-powered human recognition systems, similar to those which may be leveraged to automatically generate alt text. Their experiences and perspectives informed alt text design considerations. I will conclude with my future research program on developing a wider ethics of AI accessibility.

Bio: Cynthia Bennett is a Postdoctoral Fellow at Carnegie Mellon University's Human-Computer Interaction Institute and a Researcher in Apple's AI and Machine Learning organization. Her HCI research concerns the intersection of power, disability, design, and accessibility. Bennett is regularly invited to speak about her research; recent hosts include The Radical AI podcast and Apple's Worldwide Developers Conference. She has received funding from the National Science Foundation, Microsoft Research, and the University of Washington's Human Centered Design and Engineering department, where she completed her Ph.D. She has published in top-tier HCI venues, and seven of these papers have received awards.

Expressive Computation: Integrating Programming and Physical Making

Speaker: Jennifer Jacobs
Tuesday, March 1, 11:30am - 12:30pm MT

Abstract: Creators in many different fields use their hands. Artists and craftspeople manipulate physical materials, manufacturers manually control machine tools, and designers sketch ideas. Computers are increasingly displacing many manual practices in favor of procedural description and automated production. Despite this trend, computational and manual forms of creation are not mutually exclusive. In this talk, I argue that by developing methods to integrate computational and physical making, we can dramatically expand the expressive potential of computers and broaden participation in computational production. To support this argument, I will present research across three categories: 1) Integrating physical and manual creation with computer programming through domain-specific programming environments. 2) Broadening professional computational making through computational fabrication technologies. 3) Broadening entry points into computer science learning by blending programming with art, craft, and design. Collectively, my research demonstrates how developing computational workflows, representations, and interfaces for manual and physical making can enable manual creators to leverage existing knowledge and skills. Furthermore, I'll discuss how collaborating with practitioners from art, craft, and manufacturing science can diversify approaches to knowledge production in systems engineering and open new research opportunities in computer science.

Bio: Jennifer Jacobs is an Assistant Professor at the University of California, Santa Barbara in Media Arts and Technology and Computer Science (by courtesy). At UCSB, she directs the Expressive Computation Lab, which investigates ways to support expressive computer-aided design, art, craft, and manufacturing by developing new computational tools, abstractions, and systems that integrate emerging forms of computational creation and digital fabrication with traditional materials, manual control, and non-linear design practices. Prior to joining UCSB, Jennifer received her Ph.D. from the Massachusetts Institute of Technology and was a Postdoctoral Fellow at the Brown Institute of Media Innovation within the Department of Computer Science at Stanford University. She also received an M.F.A. and a B.F.A. from Hunter College and the University of Oregon, respectively. Her research has been presented at leading human-computer interaction research venues and journals including UIST, DIS, SIGGRAPH, and, most prominently, at the flagship ACM Conference on Human Factors in Computing Systems (CHI), where she received two best paper awards and one best paper honorable mention award in the past four years. As principal investigator, she has received two research grants in 2020 and 2021 from the National Science Foundation Division of Information and Intelligent Systems in computational fabrication for manufacturing and commercial craft.

Toward On-Body Health Monitoring and Highly Personalized Medicine

Speaker: Katherine Jinkins
Friday, February 25, 11:30am - 12:30pm MT

Abstract: Wearable or implantable biodevices enable continuous health monitoring and diagnosis of diseases or conditions in a fast, cost-effective, and accurate manner. These devices also allow the delivery of therapeutics, and subsequently create a new on-body realm of highly personalized medical treatment that can adapt to the dynamic nature of physiological processes. However, simultaneous control over the materials, electronics, and interface with the body, which is required for safe and conformal devices, has been difficult to achieve. In my research, I work at the intersections of materials design, bio-inspired engineering, nanomaterials assembly, and microelectronics to overcome this challenge.

In this talk, I will first outline a materials and device strategy for developing thermally switchable adhesives that interface wearable devices with the body. Implementing wireless control to modulate the adhesion strength of novel stimuli-responsive adhesives from a strong to a weak state eliminates the risk of damage to skin during removal, improving patient safety. Second, I will discuss a technique in which newly discovered liquid crystal phenomena are harnessed to assemble semiconducting carbon nanotubes into densely packed, highly aligned arrays, enabling nanotube field-effect transistors with unprecedented uniformity and performance across the wafer scale. These nanotube arrays enable high-performance logic and RF devices and promise to lead to next-generation flexible and wearable electronics. Finally, I will conclude by discussing new possibilities to develop future materials and electronics systems with programmable and stimuli-responsive functionalities for implants and drug delivery, as well as routes to exploit nanomaterials assembly for novel wearable and flexible devices, such as sweat microfluidics and biosensors.

Bio: Dr. Katherine R. Jinkins is a postdoctoral fellow at Northwestern University, where she works with Prof. John A. Rogers in the Querrey Simpson Institute for Bioelectronics. She received her Ph.D. in Materials Science in 2020 from the University of Wisconsin-Madison, where she was advised by Prof. Michael S. Arnold. She has received funding through the National Science Foundation Graduate Research Fellowship, the University of Wisconsin-Madison Distinguished Graduate Research Fellowship, and a Querrey Simpson Institute for Bioelectronics research grant to support her research.

Brain-Body Music Interfaces for Creativity, Education, and Well-Being

Speaker: Grace Leslie
Thursday, February 24, 11:30am - 12:30pm MT

Abstract: Music is an important and universal means of communication. The feelings of connection and well-being that music creates are supported by a process in the brain and body called entrainment, in which our natural rhythms (speaking, walking, heartbeats, breathing, and even brain waves) synchronize with the rhythms we hear. The research activities I supervise at the Brain Music Lab at Georgia Tech expand on this powerful process by building software and hardware that translates brain and body rhythms into music and sound. I will review several music technologies that invite beneficial brain and body rhythms within and between listeners, and I will introduce the musical performance and composition practice I've developed in concert with these technologies. For researchers, doctors, and caretakers, this work has the potential to expand our scientific understanding of music's beneficial effects on the brain and body, and may lead to new music-based interventions for adults, children, and infants.

Bio: Grace Leslie is a flutist, electronic musician, and scientist. She develops brain-music interfaces and other physiological sensor systems that reveal aspects of her internal cognitive and affective state, those left unexpressed by sound or gesture, to an audience. Dr. Leslie is an Assistant Professor in the School of Music at Georgia Tech, where she directs the Brain Music Lab at the Center for Music Technology. Her research uses scientific analysis of EEG, ECoG, and physiological data to understand affective responses to music engagement. Additionally, she uses these experimental methods to engineer new musical interventions for health and well-being, including the development of musical brain-computer interfaces. Dr. Leslie was recently a fellow at the Neukom Institute for Interdisciplinary Computation at Dartmouth College, and a postdoctoral fellow in Rosalind Picard's Affective Computing Group at the MIT Media Lab. She completed her PhD in Music and Cognitive Science at the University of California, San Diego, performing research with Scott Makeig at the Swartz Center for Computational Neuroscience.

Interactive Realities and Creative Spatial Interfaces

Speaker: Amy Banić
Tuesday, February 22, 11:30am - 12:30pm MT

Abstract: Extended Reality (XR), consisting of Virtual, Augmented, and Mixed Reality (VR/AR/MR), is a rapidly emerging and continuously changing field. Many researchers have shown benefits of using these types of immersive systems for spatial learning, physical and social training for high-risk situations, virtual-to-real-world knowledge transfer, creative expression, therapy for physical and mental health, aging populations, historic preservation, fostering empathy, reducing implicit bias, entertainment, film, scientific analysis and discovery, and more. Typically, a head-mounted display and two joysticks might be used for these applications. This standard kit is quickly becoming the commercial norm for virtual reality displays, typically including a stereoscopic display that fosters visual immersion, stereo sound that fosters auditory immersion, and tracking and game-based input controllers that foster spatial interaction. While this is a great advancement for closing the gap in widespread usage, a one-size-fits-all approach may limit the potential of immersive systems and user experiences.

Our understanding of human perception, movement, abilities, and limitations, coupled with the design of visual and illusionary cues, input devices, interaction techniques, registration, tracking, and output to our sensory channels, influences how we, as humans, use, explore, and engage with immersive systems. My passion is to explore creative technologies for 3D User Interfaces/Interaction (3D UI) with immersive systems to understand how we can interact with these systems better, with more engagement, or in more interesting ways, to further improve learning, health, creativity, and workflow. This talk will present past and present research that I have supervised or collaborated on, in the context of three areas fueling this passion: (1) understanding human abilities and designing techniques to positively influence human movement and interaction, (2) creative technologies for spatial interfaces, and (3) creative modalities for expression. Along the way, challenges and future research potential will be discussed. The goal of this talk is to provide samples of research yet leave the audience with more questions than answers, to inspire ideas, and to foster potential collaborations with students and faculty.

Bio: Amy Banić is a Visiting Associate Professor at the ATLAS Institute this year. She is an Associate Professor in Computer Science at the University of Wyoming (UWyo) in Laramie, WY. Her research focuses on the design of 3D User Interfaces and Devices for Virtual / Augmented / Mixed Reality (XR) Environments, Immersive Visualizations, and Virtual Humans. Banić's educational background is rooted in the intersection of design, computer graphics, and human-centered computing. Banić has a B.S. in Computer Science and a B.A. in Studio & Digital Arts from Duquesne University in Pittsburgh, PA. She earned her M.S. and Ph.D. under the mentorship of IEEE Virtual Reality Career awardee Larry Hodges at the University of North Carolina at Charlotte by 2008. She furthered her career development as a Post-Doctoral Fellow at Clemson University, where she helped initiate the Virtual Environments Research Group in the School of Computing and Digital Production Arts Program. She joined the University of Wyoming in 2010 and has been developing her career there ever since.

At UWyo, she is the Director of the Interactive Realities Research Laboratory, Co-Director of the new Center for Design Thinking at UWyo, faculty mentor of the UWyo InnoVRtors and Equality for Computing student groups, and holds a joint appointment at the Advanced Visualization Lab in CAES at the Idaho National Laboratory. Banić delivered a keynote for the Workshop on Novel Input Devices and Interaction Techniques (NIDIT) in 2021. She served as general chair for the 3rd ACM Symposium on Spatial User Interaction in 2015 and general co-chair for the Rocky Mountain Celebration of Women in Computing Conference in 2013. She has organized multiple workshops and tutorials on interactive and volumetric immersive visualizations, and has served consistently on the program committees of the IEEE Virtual Reality and 3D User Interfaces conferences in various roles since 2004.

Banić is currently spending the year researching and teaching here at CU 糖心Vlog破解版 with ATLAS. She is collaborating on research projects with the ACME Lab, such as the AR Drum Circle. In Fall semester 2021 she taught Introduction to Virtual and Augmented Reality. In Spring 2022, she is teaching Creative Spatial Interfaces and Computer Animation, with a focus on storytelling applied to 2D, 3D, and immersive animations. Banić is truly grateful for this opportunity to work with such creative and inspiring individuals at ATLAS and broadly at CU 糖心Vlog破解版!

Making the Future—Constructionist Tools for Critical Reflection and Social Action

Speaker: Nathan Holbert
Thursday, February 17, 11:30am - 12:30pm MT

Abstract: The many social crises currently persisting in and across our communities can be directly tied to educational challenges. Whether it is the disregard for science even in the face of a global public health emergency, the dehumanization of our fellow humans based on skin color or country of birth, or the lack of urgency to heal a dying planet, each issue points to a failure to educate a population in a way that not only promotes new and deeper ways of learning about these problems, but also fosters a sense of empowerment and possibility to address them. In my research, I aim to engage learners in future-building: to critically reflect on the state of the world today—its challenges, successes, and failures—and to imagine and begin building new systems, technologies, and societies where all people can thrive. In this talk I will show how I go about iteratively creating and studying playful learning technologies, tools, and spaces that enable learners to use their unique perspectives and experiences to address issues of personal and communal importance.

Bio: Nathan Holbert is an Associate Professor of Communication, Media, and Learning Technologies Design at Teachers College, Columbia University. His work involves the development and study of playful tools, environments, and activities that allow all children to leverage computational power as they build, test, tinker, and make sense of personally meaningful topics, phenomena, or questions. Nathan received his Ph.D. in the Learning Sciences from Northwestern University and is the founder and director of the Snow Day Learning Lab. Nathan's recent publications include "Afrofuturism as Critical Constructionist Design: Building Futures from the Past and Present" in Learning, Media, and Technology; "The Case for Alternative Endpoints in Computing Education" in the British Journal of Educational Technology; and "Designing Educational Video Games to Be Objects-to-Think-With" in the Journal of the Learning Sciences. Nathan is also co-editor of the volume Designing Constructionist Futures: The Art, Theory, and Practice of Learning Designs, published by MIT Press.

How to Investigate Creativity and Participation

Speaker: Andruid Kerne
Tuesday, February 15, 11:30am - 12:30pm MT

Abstract: Creativity and participation are vital, ineffable aspects of human experience. Creativity is essential to personal well-being and national innovation. Participation is essential to well-being, learning, and democracy. At the same time, performing scientific investigation of new technologies that support human experiences of creativity and participation is challenging, because they are nonlinear processes, characterized neither by singular correct answers nor by a one and only best practice. This complicates the role of data in establishing evidence and verifying findings. We need to understand what data methodologies enable what types of rigorous investigation of the effects of new technologies on human beings.

We present a series of studies exploring how new technologies impact creativity and participation, using data methodologies as an epistemological lens. The technologies span social media, spatial representations of information collections, algorithm-in-the-loop systems, embodied interaction, and games. Situated contexts of use span entertainment, crisis response, and education, involving engineering, architecture, and media arts. In formulating an epistemology of data methodologies, we contribute findings on the need for visual and textual qualitative data, in addition to quantitative data, for studying the impacts of new technologies on ineffable aspects of human experience.

Bio: Andruid Kerne is a transdisciplinary human-computer interaction researcher and educator. His Interface Ecology Lab traverses boundaries to investigate possibilities and realities of how new technologies affect creativity, participation, and inclusion in human activity. He holds a B.A. in applied mathematics / electronic media from Harvard, an M.A. in music composition from Wesleyan, and a Ph.D. in computer science from NYU. Kerne is a program director in the Information and Intelligent Systems division of the National Science Foundation, where he divides his time across the Human-Centered Computing, Future of Work at the Human-Technology Frontier, Ethical and Responsible Research, and Emerging Technologies for Teaching and Learning programs. He is a Professor of Computer Science and Engineering at Texas A&M University and has published over 100 papers and raised over $3M in research funding. Kerne is a member of the steering committee of ACM Creativity and Cognition.

Build Cool Things That Matter

Speaker: Matt Carney
Tuesday, February 8, 11:30am - 12:30pm MT

Abstract: Engineers have magic powers: they can realize things into existence. But what you make matters, or at least it should matter to you, and to the people you build it for. In our ever-changing world, we need to focus our efforts on doing good. In this talk, I will take you through some of the work I have been lucky enough to be a part of, from humanoid robots to bionics, pandemic response, and even some subversive art. All of this is to tell a story of how individual contributions are amplified by teams, and how even small teams can do big things.

Bio: Dr. Matt Carney is a research affiliate at the MIT Media Lab, Co-founder and CEO of Open Standard Industries, interim CTO at the Aurelia Institute, and an Applied Scientist at Amazon Robotics. In his spare time, he also advises various hardware startups and students looking for direction. He completed his PhD in 2020 at the MIT Media Lab Biomechatronics Group, where he developed high-performance, lower-extremity, powered prostheses. His technical leadership builds on more than 18 years of fast-paced development split between academia and industry, where he has driven advancements in humanoid robotics, prostheses, medical devices, and clean-energy systems. Matt earned a Ph.D. and an S.M. at the MIT Media Lab (2020, 2015) and degrees in mechanical engineering from UC Berkeley (MS 2008) and Cal Poly (BS 2004). He has received named shout-outs in two TED talks, his work was shown in a third, and he has been an invited speaker at the Alpbach Forum, TEDx, EmTech France, and Solidworks World. He is a named inventor on 5 issued US patents, an author on 10 academic publications, and his PhD work is displayed on a 2020 US postage stamp representing robotics innovation in the US. Matt is also a life-long bicycle commuter and long-distance hiker, and dabbles in creative spaces.

Galvanic Vestibular Stimulation is a Novel Approach to Alter Human Perception

Speaker: Torin Clark
Tuesday, February 1, 11:30am - 12:30pm MT

Abstract: Galvanic vestibular stimulation (GVS), applying low levels of current to the mastoids behind the ears, has long been known as an approach to artificially stimulate the vestibular system in the inner ear, which normally senses head orientation and motion. Recently, we have explored several applications of using GVS to modify human perception. First, by applying low levels of white noise, it is possible to produce stochastic resonance (SR) in the vestibular system. SR is a mechanism in which a noisy waveform resonates with a signal (in the vestibular system, physical self-motion) in a non-linear dynamical system to enhance information throughput; it has been observed in several human sensory systems, such as visual and tactile perception. Here, we find that low levels of white electrical noise applied to the mastoids can improve vestibular perceptual thresholds (i.e., how small a self-motion a person can reliably perceive). The improvement is consistent with the theory of SR: as more noise is added, thresholds improve as the noise resonates with the signal, but eventually too much noise is added and performance degrades. More recently, we have explored cross-modal SR, in which white noise added in one sensory channel (e.g., GVS) actually improves perception in other sensory channels (e.g., visual, tactile, or auditory perception), potentially through resonance in more centrally located multi-modal neurons.

As an alternative use of GVS, we have applied supra-threshold, non-noisy waveforms to modulate human perception of self-motion. In addition to making a stationary individual feel they are moving, we are exploring whether GVS can be applied concurrently and coherently with self-motion to alter or reduce the sensation of self-motion. Reducing the sensation of motion when actual motion is unavoidable (e.g., in the backseat of a car or on a boat) can potentially reduce the severity of motion sickness. Finally, we are exploring the potential of using GVS as a novel display modality. In many human operator domains (e.g., aerospace cockpits), the visual and auditory sensory channels are saturated, motivating the use of novel display modalities. GVS is an interesting option for transferring information via the vestibular system, which is sensitive to the magnitude, direction, and characteristics of the waveform. Here, we demonstrate that humans can reliably perceive differences between two cues that differ in waveform frequency, providing a means for information transfer via GVS, which we found is robust to various environmental conditions (walking, standing, moving as in a vehicle, or being in a loud room). Further, by using short "bursts" of moderate frequency (e.g., 50 Hz), we are able to avoid the disorienting sensation of self-motion. These various applications of GVS suggest some promise for operational use, though we will also discuss critical limitations.
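The inverted-U behavior of stochastic resonance described in the abstract can be illustrated with a minimal simulation (this is a generic textbook-style sketch, not the speaker's model or data; the signal amplitude, threshold, and noise levels are arbitrary illustrative choices): a weak, sub-threshold sinusoid passed through a hard-threshold "sensor" is undetectable with no noise, best detected at a moderate noise level, and lost again when the noise dominates.

```python
# Stochastic resonance sketch: detection of a sub-threshold signal by a
# hard-threshold detector improves with moderate added noise, then degrades.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
signal = 0.5 * np.sin(2 * np.pi * 1.0 * t)   # weak signal, peak 0.5
threshold = 1.0                               # sensor fires only above 1.0

def detection_quality(noise_sd, trials=20):
    """Mean |correlation| between thresholded sensor output and the signal."""
    corrs = []
    for _ in range(trials):
        noisy = signal + rng.normal(0.0, noise_sd, size=t.size)
        fired = (noisy > threshold).astype(float)
        if fired.std() == 0.0:                # sensor never crossed threshold
            corrs.append(0.0)
        else:
            corrs.append(abs(np.corrcoef(fired, signal)[0, 1]))
    return float(np.mean(corrs))

quiet = detection_quality(0.05)    # almost no noise: signal never crosses
optimal = detection_quality(0.4)   # moderate noise: crossings track the peaks
loud = detection_quality(5.0)      # heavy noise: crossings are nearly random

# Inverted-U: the moderate-noise condition beats both extremes.
assert optimal > quiet and optimal > loud
```

The same inverted-U shape is what the abstract describes for vestibular perceptual thresholds: performance improves as noise is added, up to an optimum, and then degrades.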

Bio: Torin K. Clark, PhD, is an Assistant Professor at the University of Colorado-糖心Vlog破解版 in the Smead Aerospace Engineering Sciences department and Biomedical Engineering program. He is a principal investigator in the Bioastronautics Laboratory and a faculty affiliate of BioServe Space Technologies. Prior to joining CU-糖心Vlog破解版 in 2015, he was a National Space Biomedical Research Institute post-doctoral fellow at Harvard Medical School and the Massachusetts Eye and Ear Infirmary. He completed his Masters and PhD in the Man-Vehicle Laboratory (now the Human Systems Laboratory) at the Massachusetts Institute of Technology, and his BS in Aerospace Engineering at the University of Colorado. His research is focused on the challenges that human operators face in complex aerospace environments. Specifically, he focuses on astronaut biomedical issues, space human factors, human sensorimotor/vestibular function and adaptation, interaction of human-autonomous and human-robotic systems, trust in autonomous systems, mathematical models of spatial orientation perception, and human-in-the-loop experiments.


Is It Possible to Disrupt a Cow?

Speaker: Seth Miller
Tuesday, January 25, 11:30am - 12:30pm MT

Abstract: This talk will, as the title promises, talk a lot about cows. But really the subject is how we can harness the forces of innovation to steer our way out of the climate crisis, using the cattle industry as the example. In the talk I will show why cows - specifically, enteric fermentation - are considered problematic. I will define the term "disruption" so that we all know what the word means, and then I will do a deep dive into whether one technology - "meat alternatives" - can truly be considered disruptive. Finally, I will present a framework showing how to rigorously examine any industry for opportunities for technological disruption, and walk through its implications for how we should address the challenge of cows in particular.

Bio: Dr. Seth Miller is President of Heron Scientific, a boutique consulting company specializing in innovation planning for companies leveraging cutting-edge chemistry and materials science. Dr. Miller has worked as a technology leader in an unusually wide-ranging number of fields, including serving as founding CEO of ClearMark Systems, a developer of anti-counterfeiting software for DARPA; CSO of Fluonic, a developer of microfluidic flow sensors for medical infusion; and CSO of EverSealed, a developer of vacuum-sealed windows. He also served as CTO of Technology Reserve, an IP licensing company, and Managing Director of Xinova, an open innovation platform. Dr. Miller is an author or co-author on 93 issued US patents, and received a Ph.D. in chemistry from the California Institute of Technology in 1998.

The Science of Where: Mapping Your Pathway Forward with Geotechnologies

Speaker: Joseph Kerski
Tuesday, January 18, 11:30am - 12:30pm MT

Abstract: All major 21st Century issues are spatial in nature, complex, and cross disciplinary, physical, and political boundaries. These issues, ranging from natural hazards, equity, energy, water, habitat, and biodiversity to supply chains and more, can be understood and addressed using modern cloud-based geotechnologies. Geotechnologies include geographic information systems (GIS), remote sensing, GPS, and dynamic IoT-fed web maps. Mapping and analysis skills, along with an understanding of issues of ethics and location privacy, should be in every ATLAS program participant's toolbelt.

Join geographer and educator Joseph Kerski as we discuss the forces, trends, and skills needed for you to chart your own pathway forward with web mapping tools, spatial data, crowdsourcing field projects, story maps, and other compelling and engaging tools that will empower you to be a change agent in your community and in your world.

Bio: Joseph Kerski is a geographer with a focus on the use of Geographic Information Systems (GIS) in education. He has served as the President of the National Council for Geographic Education and has given 2 TED Talks on "The Whys of Where". He holds 3 degrees in geography and has served as geographer in 4 sectors of society, including government (NOAA, US Census Bureau, USGS), academia (Penn State, Sinte Gleska University, University of Denver, others), private industry (as Education Manager for Esri), and nonprofit organizations (with roles in geography and education associations).

Joseph has authored over 75 chapters and articles on GIS, education, and related topics, and visits 35 universities annually. He conducts professional development for educators. He has created 5,000 videos, 750 lessons, and 1,000 blog essays, and has authored 8 books, including Interpreting Our World, Essentials of Environment, Spatial Mathematics, Tribal GIS, International Perspectives on Teaching and Learning, and the GIS Guide to Public Domain Data. But as a lifelong learner, he feels as though he's just getting started and thus actively seeks mentors, partners, and collaborators.

Climate Change and Energy Transformation

Speaker: Tim Schoechle
Tuesday, January 11, 11:30am - 12:30pm MT

Abstract: This colloquium deals with the topic of climate change and the issues around the needed transformation of our global energy and electricity economy and technology. The topics addressed include:

1. Energy Transformation and climate change (high-level view of climate and energy)

2. Electricity transition: distributed vs. centralized (mid-level view of electricity)

3. Distributed solar-plus-storage and microgrids: Are they key to resilience, and how would they work?

A purpose of this colloquium is to assess the level of interest in further, more "deep-dive" colloquia, workshops, or courses (e.g., ATLS 5440 Design Studio) on this broad, multi-faceted, and rapidly evolving topic, one vital to the future of humanity.

Bio: Dr. Timothy Schoechle is an international consultant in computer and communications engineering and in technical standards development. He presently serves as Secretary of ISO/IEC SC25 Working Group 1, the international standards committee for Home Electronic System, and is a technical co-editor of several new international standards related to smart buildings. He currently participates in a range of national and international standards bodies related to distributed energy and solar-plus-storage technology and policy issues. As an entrepreneur, Dr. Schoechle has engineered the development of electric utility premises gateways and energy management systems for over 25 years and has played a major role in the development of technical standards for smart meters and advanced metering infrastructure (AMI). He is currently an active participant in the GridWise Architecture Council (GWAC) hosted by the Pacific Northwest National Laboratory (PNNL), U.S. Department of Energy, and authored technical papers presented at six consecutive GWAC/Department of Energy-sponsored Grid-Interop technical conferences from 2007 through 2012.

Dr. Schoechle is a former faculty member of the University of Colorado College of Engineering and Applied Science. He was a co-founder of BI Incorporated, presently a $1 billion company in 糖心Vlog破解版, Colorado, a pioneer developer of RFID technology. He holds an M.S. in telecommunications engineering (1995) and a Ph.D. in communication policy (2004) from the University of Colorado, 糖心Vlog破解版.