Lydia Chilton is involved with design classes and programming at Columbia Engineering. Two of her current projects are constructing visual metaphors for creative ads and using computational tools to write humor and news satire. Her design process courses give students experience with design and teamwork, practicing iterative design to meet specific user needs.
Image gallery (captions only):
• Early low-fidelity prototype of the design interface
• Priya Pai: My Designs
• 6.831 L21: Coordinate Transforms & Clipping
• SpineAlign UI Design: 1st Place Award at DevFest 2019 (made with Sketch)
• COVID-19 Design Challenge
• Flow chart illustrating the main actions of the game engine
• Discussion interface for use in CICERO, inspired by instant messaging
• Graphics & User Interfaces | Department of Computer Science, Columbia
• Burkholderia cenocepacia Mechanism of Infection (made with Google Drawings)
• UC San Diego's ProtoLab
• (PDF) Seaweed: a web application for designing economic games
• Filet-o-Fish
Video Gallery for Lydia Chilton UI Design
AI and Design with Dr. Lydia Chilton
Design is the art of making change in the world: it's an incredibly impactful skill, but one that is hard to teach, learn, and practice. How can AI help us better teach and practice design? Professor Chilton will introduce a research project that helps designers overcome the fundamental cognitive challenges of design and talk about design work done by students at Columbia.
Lydia Chilton is an Assistant Professor in Computer Science at Columbia University. She is an early pioneer in decomposing and crowdsourcing complex tasks. Her current research uses the combined talents of people and AI to solve design problems that neither could solve alone.
EasyFit - Demo Figma Prototype
This video shows a prototype of a website that I created in Figma.
UI Design 4170 - Lydia Chilton
Design at Columbia: Day 1 Understand
Phase 1: "Understand"
0:01 Week 1 Kickoff (30 min), Laura Block CC'20, Design@Columbia Lead
27:00 Understanding Your Problem Space: Competitor Landscape, Perrin Anto, Interaction Designer @ Google
1:16:16 User Research & Interviews, Aleksei Igumnov, UX Research Intern @ Apple
Design at Columbia: Day 4 Prototype
0:01-0:30 Project Meeting, Phase 4: Prototype, Laura Block CC'20, Design@Columbia Lead
0:35-1:15 Visual Information Design, Lydia Chilton, Columbia Professor of Computer Science
35:35 Pivoting to Design & Portfolio First Look (30 min), Laura Block CC'20, Design@Columbia Lead
1:09:30 Synthesizing Your User Research: Archetypes & Journey Maps (50 min), Perrin Anto SEAS'20, Interaction Designer @ Google
AI for All
In this exclusive webinar created for Alumni Weekend Reinvented, Augustin Chaintreau, Associate Professor of Computer Science, Lydia Chilton, Assistant Professor of Computer Science, and Brian Smith, Assistant Professor of Computer Science, explore how AI and assistive technologies can be used to address issues of equality and justice for all.
Eliciting Gestures for Novel Note-taking Interactions
Katy Ilonka Gero, Lydia B Chilton, Chris Melancon, Mike Cleron
DIS'22: ACM SIGCHI Conference on Designing Interactive Systems (DIS)
Session: Video Previews
Abstract
Handwriting recognition is improving in leaps and bounds, and this opens up new opportunities for stylus-based interactions. In particular, note-taking applications can become a more intelligent user interface, incorporating new features like autocomplete and integrated search. In this work we ran a gesture elicitation study, asking 21 participants to imagine how they would interact with an imaginary, intelligent note-taking application. Participants were prompted to produce gestures for common actions such as select and delete, as well as less common actions (for gesture interaction) such as autocomplete accept/reject, `hide', and search. We report agreement on the elicited gestures, finding that while existing interactions are prevalent (like double taps and long presses), a number of more novel interactions (like dragging selected items to hotspots or using annotations) were also well-represented. We discuss the mental models participants drew on when explaining their gestures and what kind of feedback users might need to move to more stylus-centric interactions.
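To make "agreement" concrete: gesture elicitation studies commonly report an agreement rate per referent, the fraction of participant pairs that proposed the same gesture (as in Vatavu & Wobbrock's formula). The abstract does not specify the exact metric used, so the sketch below is a minimal Python illustration with hypothetical gesture labels.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent: the share of participant
    pairs that proposed the same gesture (Vatavu & Wobbrock, 2015)."""
    n = len(proposals)
    if n < 2:
        return 1.0
    counts = Counter(proposals)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Hypothetical elicited gestures for the "delete" referent
delete = ["scratch-out", "scratch-out", "long-press", "scratch-out",
          "drag-to-hotspot", "long-press"]
print(f"AR(delete) = {agreement_rate(delete):.2f}")
```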
Microproductivity: Getting Big Things Done Using Smaller Moments
In today’s world, people have to attend to a number of tasks nearly simultaneously, and with the widespread use of mobile devices, tasks can be tackled almost anywhere at any time. It is not surprising, then, that being able to address any one task for an extended period is becoming increasingly difficult. A new research area is focusing on “microproductivity,” breaking larger tasks down into manageable components conducive to small moments throughout the day. In this breakout session, we bring together experts from academia and the product side to share their vision of a future where traditional tasks can be accomplished via both focused attention and microproductivity. We will unpack how microproductivity may manifest across different domains and scenarios, identify key challenges in designing for microproductivity, discuss how expected outcomes may be impacted, and put forward an agenda that can move the field toward real-life adaptation.
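As an illustration of the decomposition idea (not anything presented in the session), here is a minimal Python sketch of a microtask queue: a larger task split into self-contained steps, one of which is popped whenever a small moment opens up. The task names and time threshold are made up.

```python
from collections import deque

# A large task ("write report") decomposed into self-contained microtasks.
microtasks = deque([
    "outline section headings",
    "draft intro paragraph",
    "find two supporting citations",
    "proofread abstract",
])

def free_moment(minutes):
    """Do one microtask if a spare moment is long enough."""
    if microtasks and minutes >= 2:
        print("doing:", microtasks.popleft())

free_moment(3)   # e.g., waiting for a build
free_moment(5)   # e.g., between meetings
```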
See more at microsoft.com/en-us/research/event/faculty-summit-2019/
Designing for Public Safety: What We Learned from UX Design Studies of the NYS Contact Tracing App
UX designers from Tech: NYC along with faculty from Columbia Business School, the Fu Foundation School of Engineering and Applied Science, and the Data Science Institute discuss the ongoing development of the New York State contact tracing app. The talk included the presentation of findings from a weeklong research study conducted jointly with faculty from CBS, SEAS, and the Data Science Institute as well as designers from Tech: NYC and the New York State Department of Health.
UX+Data: Computation as Your Experience Design Superpower
The best aircraft pilot could never have navigated to the moon because the flight required calculations that outstripped human capacities. The best experience designers of the future will be asked to create solutions for increasingly diverse and sophisticated design spaces and they'll similarly need new superpowers to tackle those problems.
Computational design – the ability to use algorithms to predict, explain, and even shape user behavior – is that new superpower.
InVision recently released its predictions for five trends UX designers should know for 2020 (invisionapp.com/inside-design/2020-design-trends/). Leading that list is computational design.
Robb Beal, a product design leader with a passion for stats and creating data-centric experiences, will present a series of case studies that explore this superpower in use. He'll look at examples including:
• how state-of-the-art, data-driven UI optimization services are serving designers/researchers at computational leaders like Google (a toy sketch of this kind of optimization follows this list)
• how computational approaches are facilitating the creation of new, magical types of creative design tools for marketing experiences with emphasis on recent applied work by researchers at Columbia University's Computational Design Lab
• how data-driven optimization is helping shape and define the new visual and interactive UI components being invented for immersive platforms and applications at Facebook Reality Labs, Microsoft HoloLens, and others
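The sketch below is a minimal, hypothetical illustration of the data-driven UI optimization mentioned above: an epsilon-greedy policy that serves one of two button variants and shifts traffic toward the better click-through rate. It is not any named service's API; the variants and click probabilities are simulated.

```python
import random

variants = {"A": {"shows": 0, "clicks": 0},
            "B": {"shows": 0, "clicks": 0}}
EPSILON = 0.1  # fraction of traffic reserved for exploration

def pick_variant():
    if random.random() < EPSILON:
        return random.choice(list(variants))
    # Exploit: variant with the best observed click-through rate so far.
    return max(variants, key=lambda v: variants[v]["clicks"] /
                                       max(1, variants[v]["shows"]))

def record(variant, clicked):
    variants[variant]["shows"] += 1
    variants[variant]["clicks"] += int(clicked)

for _ in range(1000):  # simulated sessions with made-up true rates
    v = pick_variant()
    record(v, random.random() < (0.05 if v == "A" else 0.08))
print(variants)  # traffic should concentrate on the better variant, B
```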
Speaker Bio:
Robb Beal is a veteran, nerdy product design leader with a passion for exploring the frontiers of user experience and welcoming others to the most interesting of those frontiers. Robb got his start in tech at Apple as a field product manager for a web app platform before going on to design and product-manage a hit consumer macOS app that blended the best of native UIs with web services. Along the way he also designed pioneering social software, helped build design teams, and led major redesigns for top-tier analytics and consumer communications startups, the Fortune 500 company Graybar, Scottrade, and others.
SymbolFinder: Brainstorming Diverse Symbols Using Local Semantic Networks
Savvas Petridis, Hijung Valentina Shin, Lydia B Chilton
UIST'21: ACM Symposium on User Interface Software and Technology
Session: Summarization & Semantics
Abstract
Visual symbols are the building blocks for visual communication. They convey abstract concepts like reform and participation quickly and effectively. When creating graphics with symbols, novice designers often struggle to brainstorm multiple, diverse symbols because they fixate on a few associations instead of broadly exploring different aspects of the concept. We present SymbolFinder, an interactive tool for finding visual symbols for abstract concepts. SymbolFinder molds symbol-finding into a recognition rather than recall task by introducing the user to diverse clusters of words associated with the concept. Users can dive into these clusters to find related, concrete objects that symbolize the concept. We evaluate SymbolFinder with two studies: a comparative user study, demonstrating that SymbolFinder helps novices find more unique symbols for abstract concepts with significantly less effort than a popular image database, and a case study demonstrating how SymbolFinder helped design students create visual metaphors for three cover illustrations of news articles.
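As a rough illustration of the clustering idea (not the paper's implementation, which uses local semantic networks), the Python sketch below groups hypothetical word associations for a concept into clusters a user could browse. The words and the 2-D "embedding" coordinates are made-up stand-ins for real word vectors.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical associations for the concept "reform", each with a toy
# 2-D coordinate standing in for a real word embedding.
associations = {
    "ballot":    (0.90, 0.10), "vote":      (0.85, 0.15),
    "petition":  (0.80, 0.20), "gavel":     (0.10, 0.90),
    "scales":    (0.15, 0.85), "law":       (0.20, 0.80),
    "hammer":    (0.50, 0.50), "blueprint": (0.55, 0.45),
}
words = list(associations)
X = np.array([associations[w] for w in words])

# Cluster the association space so a user can scan diverse facets
# of the concept instead of fixating on one.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(f"cluster {k}:", [w for w, l in zip(words, labels) if l == k])
```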
Video Previews of the UIST 2021 Technical Papers Program
Stanford Seminar: Computational Ecosystems
Haoqi Zhang
Northwestern University
This seminar presents dynamic professionals sharing their industry experience and cutting-edge research within the human-computer interaction (HCI) field. Each week, a unique collection of technologists, artists, designers, and activists will discuss a wide range of current and evolving topics pertaining to HCI.
Learn more about Stanford's Human-Computer Interaction Group: hci.stanford.edu
Learn about Stanford's Graduate Certificate in HCI: online.stanford.edu/programs/human-computer-interaction-graduate-certificate
View the full playlist: youtube.com/playlist?list=PLoROMvodv4rMyupDF2O00r19JsmolyXdD&disable_polymer=true
0:00 Introduction
2:40 Best human solution
3:17 Best machine solution
3:57 Options
5:09 A call for systems: having great components is not enough.
5:53 A call for systems thinking in HCI
7:11 Advancing the approach...
9:01 Computational ecosystems are systems, designed as integrative solutions
10:12 Rest of the talk
13:50 Challenges for organizers
15:20 Cobi: Community-informed planning
15:48 1. Engage the entire community in the planning process
17:33 Core idea: two-phased collaborative planning process w/ crowds and groups
19:32 Core idea: incentive chaining
21:00 2. Help organizers resolve conflicts
22:28 Core idea: Community-informed mixed-initiative interface
24:44 Computational Ecosystem: Community-Informed Planning
28:50 Students need regulation skills
30:26 Agile Research Studio (ARS)
30:54 ARS scales faculty time
32:00 ARS is a computational ecosystem for developing regulation skills
34:17 ARS: planning
37:39 Distributed help is not one tool...
38:56 Outcomes (3 yrs)
39:49 Planning Strategies
40:11 Help & Help-seeking
41:04 Faculty Time: 10-12 hours/week
42:00 Computational Ecosystem: Agile Research Studios
42:36 Regulation skills beyond ARS?
45:25 What's next
45:35 Preview #1: Ecosystem-level architectures
45:55 Example: On-the-go crowdsourcing
48:32 Example: Readily Available Learning Experiences (RALE)
50:30 A leap: mixed-initiative scaffolds
51:29 Role of technology in advancing human values at scale
52:10 Scaling amplifies compromise
56:01 Delta Lab
VisiFit: Structuring Iterative Improvement for Novice Designers
Lydia B Chilton, Ecenaz Jen Ozmen, Sam H Ross, Vivian Liu
CHI '21: The 2021 ACM CHI Conference on Human Factors in Computing Systems
Session: Computational Design
Abstract
Visual blends are an advanced graphic design technique to seamlessly integrate two objects into one. Existing tools help novices create prototypes of blends, but it is unclear how they would improve them to be higher fidelity. To help novices, we aim to add structure to the iterative improvement process. We introduce a method for improving prototypes that uses secondary design dimensions to explore a structured design space. This method is grounded in the cognitive principles of human visual object recognition. We present VisiFit – a computational design system that uses this method to enable novice graphic designers to improve blends with computationally generated options they can select, adjust, and chain together. Our evaluation shows novices can substantially improve 76% of blends in under 4 minutes. We discuss how the method can be generalized to other blending problems, and how computational tools can support novices by enabling them to explore a structured design space quickly and efficiently.
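To make the "select, adjust, and chain" workflow concrete, here is a minimal Python sketch of the general pattern: each secondary design dimension yields computationally generated options, and a novice's picks compose into an improved blend. The dimension names and option generators are illustrative assumptions, not VisiFit's actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Blend:
    edits: list = field(default_factory=list)  # accumulated improvements

def options_for(dimension):
    """Stand-in for computationally generated candidates per dimension."""
    return [f"{dimension}-option-{i}" for i in range(3)]

def improve(blend, dimensions, choose):
    """Walk the structured design space one dimension at a time,
    chaining whichever option the user selects."""
    for dim in dimensions:
        pick = choose(dim, options_for(dim))
        if pick is not None:  # users may skip a dimension
            blend.edits.append(pick)
    return blend

# Hypothetical dimensions; 'choose' simulates a user picking option 0.
result = improve(Blend(),
                 ["silhouette", "color", "internal-detail"],
                 choose=lambda dim, opts: opts[0])
print(result.edits)
```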
Pre-recorded Presentations for the ACM CHI Virtual Conference on Human Factors in Computing Systems, May 8-13, 2021
Design Guidelines for Prompt Engineering Text-to-Image Generative Models
Vivian Liu, Lydia B Chilton
CHI'22: ACM Conference on Human Factors in Computing Systems
Session: Natural Language
Abstract
Text-to-image generative models are a new and powerful way to generate visual artwork. However, the open-ended nature of text as interaction is double-edged; while users can input anything and have access to an infinite range of generations, they also must engage in brute-force trial and error with the text prompt when the result quality is poor. We conduct a study exploring what prompt keywords and model hyperparameters can help produce coherent outputs. In particular, we study prompts structured to include subject and style keywords and investigate success and failure modes of these prompts. Our evaluation of 5493 generations over the course of five experiments spans 51 abstract and concrete subjects as well as 51 abstract and figurative styles. From this evaluation, we present design guidelines that can help people produce better outcomes from text-to-image generative models.
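As a small illustration of the prompt structure the study examines (subject keywords crossed with style keywords, with several generations per prompt), here is a hypothetical Python sketch. The subjects, styles, seed count, and the commented-out generate call are assumptions for illustration, not the paper's exact setup.

```python
import itertools

subjects = ["a lighthouse", "friendship", "a clockwork city"]
styles = ["watercolor", "cubism", "vaporwave"]
SEEDS_PER_PROMPT = 3  # the study evaluated multiple generations per prompt

# Structured prompts: SUBJECT "in the style of" STYLE.
prompts = [f"{subj} in the style of {style}"
           for subj, style in itertools.product(subjects, styles)]

for prompt in prompts:
    for seed in range(SEEDS_PER_PROMPT):
        # generate(prompt, seed=seed)  # call your text-to-image model here
        print(f"[seed {seed}] {prompt}")
```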
Web: chi2022.acm.org/
Pre-recorded presentations of CHI 2022
Hierarchical Summarization for Longform Spoken Dialog
Daniel Li, Thomas Chen, Albert Tung, Lydia B Chilton
UIST'21: ACM Symposium on User Interface Software and Technology
Session: Brushing, Talking, and Virtual Conferencing
Abstract
Every day we are surrounded by spoken dialog. This medium delivers rich, diverse streams of information auditorily; however, systematically understanding dialog can often be non-trivial. Despite the pervasiveness of spoken dialog, automated speech understanding and quality information extraction remain markedly poor, especially when compared to written prose. Furthermore, compared to understanding text, auditory communication poses many additional challenges such as speaker disfluencies, informal prose styles, and lack of structure. These concerns all demonstrate the need for a distinctly speech-tailored interactive system to help users understand and navigate the spoken language domain. While individual automatic speech recognition (ASR) and text summarization methods already exist, they are imperfect technologies; neither considers user purpose and intent nor addresses spoken language induced complications. Consequently, we design a two-stage ASR and text summarization pipeline and propose a set of semantic segmentation and merging algorithms to resolve these speech modeling challenges. Our system enables users to easily browse and navigate content as well as recover from errors in these underlying technologies. Finally, we present an evaluation of the system which highlights user preference for hierarchical summarization as a tool to quickly skim audio and identify content of interest to the user.
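A toy Python sketch of the two-stage idea: segment an ASR transcript, summarize each segment, then summarize the summaries to get a hierarchy a user can skim top-down. The naive "summarizer" (first sentence of a chunk) and fixed-size segmenter are stand-ins for the paper's models and semantic segmentation/merging algorithms.

```python
def summarize(text):
    """Placeholder summarizer: keep the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."

def segment(transcript, size=3):
    """Placeholder segmenter: fixed-size sentence chunks."""
    sents = transcript.split(". ")
    return [". ".join(sents[i:i + size]) for i in range(0, len(sents), size)]

def hierarchical_summary(transcript):
    segments = segment(transcript)
    leaf_summaries = [summarize(s) for s in segments]  # stage 1: per segment
    top = summarize(" ".join(leaf_summaries))          # stage 2: overall
    return {"top": top, "sections": leaf_summaries}

transcript = ("So um welcome everyone. Today we talk about budgets. "
              "The numbers look good. Next quarter we expand. "
              "Hiring starts in March. Questions are welcome.")
print(hierarchical_summary(transcript))
```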
Video Previews of the UIST 2021 Technical Papers Program