process – DH LAB
The Digital Humanities Lab at Georgia Tech
https://dhlab.lmc.gatech.edu

Data by Design: Code Reuse and Visualizations
Tue, 09 Feb 2021

When I joined, the team had done a considerable amount of research, design, and requirements exploration in addition to our small prototype. One of the ideas continually emphasized was that the book would be heavily data-driven. This can be a bit of a buzzword and a Swiss Army knife of a term – the first two D’s in the popular web visualization tool D3 stand for “data-driven,” for a technical example, while “data-driven” can also characterize approaches in other disciplines, like data journalism.

Data by Design is situated in a broad meaning of the category – we’re data-driven in the sense that the scope of the book is driven by contours in the history of data, and the research follows data and the people that wield it. But in addition to researching data and using data for research, we make our argument by presenting data, rendering it visually throughout the book. These include recreations of historical data visualizations and new takes on them: new visualizations of old data and historical visualizations swapped out with new data. So we’re also data-driven in the technical sense: the book is filled with interactive web components that render and depend upon collections of data.

That characterizes a common task throughout development: in every chapter, we need to build components that take some dataset and present it visually. Software engineering is all about abstraction and generalization—if there’s a repeated task, find a way not to repeat yourself—and while this might not sound like a lot of repetition (after all, there’s a big difference between the visualizations of Peabody and Playfair), from the development side, the visualizations have much in common. They each do all or many of the following:

  • Require a data source to be loaded
  • Reformat that data in some way
  • Contain subcomponents that also need to see the data
  • Respond to events (think of these as messages) from subcomponents and send events back out to the chapter
  • Mutate the data (after user interaction)
  • Allow the user to drag it into the notebook
  • Save the mutated data to the notebook server when dragged into the notebook
  • Show up in the chapter timeline

This adds up to a sizeable chunk of functionality that can be shared across visualization components so we don’t have to reimplement these features in every new visualization.

In object-oriented programming, there are two common ways of sharing groups of functionality among objects: composition and inheritance. With composition, each object holds a copy of a helper object that implements the desired functionality; with inheritance, each object declares itself to be a version of the object that implements the functionality, and thus “inherits” it, thanks to language features like subclassing. In this example, if we use composition, each visualization component would create an instance of some helper object that has the features implemented, and it would call the helper’s methods when necessary.
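As a sketch of the composition approach (all names here are illustrative, not the project’s actual code):

```javascript
// Hypothetical helper object that owns the shared behavior (composition).
class VisualizationHelper {
  loadData(source) {
    // In the real app this would fetch from the server; here we just echo.
    return { source, rows: [] };
  }
}

// Each visualization *has a* helper and delegates to it when needed.
class PeabodyChart {
  constructor() {
    this.helper = new VisualizationHelper();
  }
  render() {
    // Delegate the shared task to the helper, then do chart-specific work.
    return this.helper.loadData("peabody.json");
  }
}
```

With inheritance, `PeabodyChart` would instead declare something like `extends Visualization` and receive `loadData` automatically.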

A diagram of the Peabody Chart and Playfair Graph related to a VisualizationHelper class through composition

If we use inheritance, we’d declare each visualization component as being a Visualization, which would be thought of as a “parent” object, and then our visualization components would automatically contain all the functionality that was in the parent. (In many OOP languages these are called “superclasses,” or “abstract superclasses” when the parent is simply a template that depends on the child to flesh it out.)

A diagram of the Peabody Chart and Playfair Graph related to a Visualization class through inheritance

Ever since the initial prototypes, Data by Design has used the JavaScript framework Vue.js. When I say “component,” I refer to the building blocks of a Vue application (somewhat analogous to a “class,” the building block of many OOP languages). I won’t go into all of the elements in a component, but they include:

  • data, which are reactive properties that can be referenced and updated throughout the component and its UI. For example, the interactive Peabody grid uses a data field to keep track of and respond to the current pixel that the user is hovering over.
  • props, which are like data, but they’re passed from the parent component to the child and can’t be changed by the child. That grid takes a “century” prop so it knows what year the first square of the grid represents.
  • methods, pieces of functionality that it can call and reuse. The Peabody grid calls one of its methods every time the user hovers their mouse over it.
  • lifecycle hooks, which set up the component to trigger pieces of functionality when something happens in the program. For example, a component might use the mounted lifecycle hook, which triggers when the component is added to the page, to register itself in the chapter timeline when it first becomes visible.
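Put together, a stripped-down component using all four elements might look like this (a sketch; the names are assumptions, not the project’s real code):

```javascript
// Minimal Vue 2 options object illustrating the four elements above.
const PeabodyGrid = {
  props: {
    // Passed down from the parent chapter; read-only in this component.
    century: { type: Number, default: 1800 },
  },
  data() {
    return {
      // Reactive: the UI updates whenever this changes.
      hoveredSquare: null,
    };
  },
  methods: {
    // Called on every mouseover of the grid.
    handleHover(square) {
      this.hoveredSquare = square;
    },
  },
  mounted() {
    // Lifecycle hook: runs once the component is added to the page,
    // e.g. to register the component with the chapter timeline.
  },
};
```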

The newest release, Vue 3, ships with a Composition API to make it as easy as possible to share code across components using composition. OOP design patterns tend to favor composition over inheritance: composition can be easier to maintain, it avoids the rigid, predetermined taxonomy required by inheritance, it always allows for multiple helper objects (inheritance frowns upon multiple parent objects), and it even allows for techniques that swap out the helper object for another at runtime.

The Composition API brings a whole suite of code-sharing features to Vue that I’ve enjoyed using in newer projects, but most current Vue projects, Data by Design included, are built on Vue 2. A primary way to share functionality among components in Vue 2 is its mixin feature, which is similar to inheritance. Instead of building a “parent” object, you build a mixin object. This is written like any other Vue component and can have all the features that a component can, but it can’t be used directly. Instead, a component can declare that it is using that mixin (or any number of mixins), in which case all the data, methods, and hooks in the mixin are merged with (or “mixed into”) the component.

This lets us do something that composition often can’t: completely move functionality out of the way of the child components. When our team builds a new component, I don’t want us to have to think about registering it in the timeline or making it draggable or figuring out how to send data to subcomponents: if it isn’t unique to the component, I want it to be done automatically. With composition, you’d still have to explicitly configure the appropriate hooks, even if that configuration is a single call to a helper object. That explicitness is preferred in many applications: with composition, you can pull in functionality as needed, without having to include things that aren’t. But in this case, aside from prioritizing an easy developer experience, we do want to enforce all of the functionality. In other words, we want to make sure that every Visualization does certain things and has certain capabilities. It’s a case where inheritance is indeed desired over composition.

The mixin is imported and added to a component using the mixins option in Vue:
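Something like this (a sketch with stand-in names rather than the project’s real files):

```javascript
// Stand-in for the real mixin (ordinarily this would be an import).
const Visualization = {
  props: ["staticDataset", "width"],
  methods: {
    loadDataset() {
      /* shared loading logic lives in the mixin */
    },
  },
};

// A component opts in through the `mixins` option; Vue merges the
// mixin's props, data, methods, and hooks into the component.
const PlayfairGraph = {
  name: "PlayfairGraph",
  mixins: [Visualization],
};
```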

And then the visualization automatically gains all of the functionality in the mixin. Any props that are specified in the Visualization mixin are now accepted by the child component, any data and methods created in the mixin are accessible, and its lifecycle hooks are registered.

So for the Visualization mixin:

  • The props allow the component to take the name of a static dataset (which is to be grabbed from the server) and/or a mutable dataset (which is to later be sent to the server as part of the notebook). There’s also a width prop, which allows the component to base the size of visualizations on a maximum width that’s determined by the chapter.
  • The hooks make sure that when the component is created, the specified datasets are downloaded or registered, and when the component is mounted (i.e., when it appears in the page) it creates a draggable icon in the corner for dragging the visualization into the notebook and lets the chapter timeline know.
  • The data (really, computed properties) allow the child to view the dataset and various details about how it is registered.
  • The various helper methods allow the component to transform the data, easily register events, and create lengths based on the passed-in width.
  • Additionally, many of the properties and methods are set up to be injected further down the component tree. That means that they aren’t just accessible by the component that’s been directly “mixed” with the mixin; any component contained by it can request them.
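The bullets above add up to roughly this shape (illustrative names only; the real mixin delegates most of the work to Vuex modules):

```javascript
// A rough sketch of the mixin's shape, not the project's actual code.
const VisualizationMixin = {
  props: {
    staticDataset: String,  // name of a read-only dataset to fetch
    mutableDataset: String, // dataset that can change and be saved
    width: Number,          // maximum width supplied by the chapter
  },
  created() {
    // download or register the specified datasets
  },
  mounted() {
    // create the drag-to-notebook icon and notify the chapter timeline
  },
  computed: {
    dataset() {
      // expose the (possibly reformatted) data to the child component
      return [];
    },
  },
  methods: {
    // helper for sizing lengths off the passed-in width
    scaled(fraction) {
      return this.width * fraction;
    },
  },
  provide() {
    // make selected properties injectable further down the component tree
    return { dataset: this.dataset };
  },
};
```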

Many of the Visualization mixin’s roles rely on its communication with various Vuex state modules. In other words, the Visualization mixin doesn’t keep track of all the visualizations and all the data itself; its job is to coordinate with those systems. I’ll talk about how these modules work in a future post, as this one’s getting long enough!

You can view the full code here.

Walking Through The Project
Fri, 09 Feb 2018

We are three new students who joined the DILAC lab and are continuing where the project left off after the summer semester. Much of the fall 2017 semester has been a catch-up phase: learning how the project got to its current state and what needs to be done from here. During this process we have spent significant time filling the gaps in knowledge surrounding this project — we will fill you all in too!

Testing the Neopixels

During our learning process, specifically while investigating the hardware setup, we briefly turned on the LEDs to see what condition they were in. Since some of the wires were no longer connected and seemed to be in chaotic order, we merely wanted to see if the LEDs even turned on. They did, but not as expected: the colors were random and uneven in intensity, while the code suggested they should all be uniform at least in light intensity if not also in color. So we were left with the question of whether the LEDs were burnt out from the disconnected wiring, miswired by our investigations, overloaded with power, or simply bad.

To find the cause, we tested each strip individually, with a different Arduino than the project’s, using RGB color strand test code. Through this testing, only one strip showed discoloration, and weak wiring seemed the most probable cause.

This was great news! Wiring is a simple fix, and we had already decided to rewire the hardware setup due to inconsistent color-coding choices. We wanted to create a more organized system, and by doing so we can confirm whether the one LED strip was discolored due to poor wiring or something else.

During this testing we were also able to eliminate an under- or overpowered battery supply as a possible cause. Using a brand-new 9V battery, we compared one strip’s intensity between the two batteries. The wires had to be double-checked for connections while changing the batteries out. Once all wires were confirmed to be in order, the LEDs showed no difference in intensity.

Understanding how the system works

Currently, there are two main components to the project: the membrane touchpad and corresponding LEDs. When someone touches a square on the touchpad, the corresponding LED should come on. Eventually, the system will entail changing the LEDs to specific colors, but we are aiming for “on” and “off” phases first.

How this works:

The Membrane Touchpad

The touchpad consists of one “master” Arduino Mega board linked to two “minion” Arduino Mega boards. The master board is set in the middle with one minion board on either side.

Each minion is connected to 30 rows of copper tape, but only at half-length; each minion is also connected to 15 full-length columns of copper tape. Minion 1 is connected to columns 1 through 15; Minion 2 is connected to columns 16 through 30. “Buttons” are created at the intersections of the copper tape columns with the rows, so each minion is in charge of detecting 450 “buttons.” The minions then calculate the position of the “button” pressed and send this information to the master. This interaction is handled by the open-source Keypad library.
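The position arithmetic might look something like this (a guess at the numbering scheme, not the project’s actual code):

```javascript
// Each minion scans its 15 columns against 30 rows: 30 * 15 = 450 buttons.
// This sketch turns a minion-local (row, column) hit into a number
// 0-899 across the whole 30 x 30 pad.
function buttonIndex(minion, row, col) {
  // minion: 1 or 2; row: 0-29; col: 0-14 (local to that minion)
  const globalCol = (minion - 1) * 15 + col; // minion 2 owns columns 16-30
  return row * 30 + globalCol;
}
```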

Issue: The Keypad library does not support such a large-scale matrix. Our team will have to modify the original library to create one that works for our project.

Pictured is the physical layout of the master and minion boards.

Top: Close up of the copper tape connection to Minion 1. Bottom: Close up of Minion 2.

Each minion is connected to the master via an I2C bus; the connection is made through pins 20 and 21 on the master: one pin carries data and the other the clock. From our understanding, the I2C bus should let the master alternately probe the two minions, constantly looking for new information, while each minion is able to report to the master. However, the connection is currently not made directly; it actually runs through a breadboard centered in the layout.

Issue: Currently, the I2C bus breadboard setup is not properly connecting both minions to the master simultaneously. As such, the team will have to either solve this issue via software or develop a new physical setup to properly relay the information.

Pictured is the actual connection so far. It is incomplete according to the code.

The LEDs

  

We have 30 strips with 30 LEDs each (900 pixels total). There are four wires attached to each LED strip: (from left to right) two are ground, one is data, and the last is power.

Two ground wires are required in order to ground the LEDs by two methods: through a breadboard as well as through a power block. The data is currently also going through a breadboard in order to be connected to the master board. The power is connected to the same power block as the one ground wire.

 

Pictured above are the LED connections to the breadboard. As mentioned already, for each LED strip there are two wires going into the breadboard: one ground and one data.

There are two sizes of power blocks, small (holds two strips) and large (holds six strips), and six power blocks total; from left to right the blocks are large, small, large, large, small, large. These sizes were chosen somewhat arbitrarily, but the blocks better organize the LEDs while keeping the length of the wires in mind. We also have one “mother” power block to channel all of the LEDs’ power blocks to a power source.

Issue: There are currently not enough power blocks to cover all the LEDs. We will need one more power block to cover the remaining LEDs.

Pictured are the power blocks (top) and the “mother” power block with its power source connector (bottom). Currently, the wires connected to the power blocks are all red, which normally indicates power; in this case the color is arbitrary, and the wires carry both power and ground to their respective connections. As an additional note, the capacitors on the power blocks prevent the LEDs from shorting. For now, the number and rating of the capacitors is a generous guess, but the accurate values will soon be calculated.

Moving Forward

The team has a few ideas to try out this semester. We will try to match the wiring of each power block to individual breadboards and then eventually move to permaboard, which will allow a more permanent system that can eventually become transportable.

Another step toward creating a stable, and possibly even modifiable, system is fixing the components in place with hot glue or velcro. We can achieve this by laser-cutting base pieces to be fastened to the components. To better organize the system, we will also break up the breadboards to match the number of power blocks. As for further developing the LEDs, we need another Arduino Mega or possibly even a Raspberry Pi.

Finally, the team is gaining another member, a computer science major, in order to make progress on the code and connect the LEDs to the entire system!

Membrane ‘Holes’ Layer
Thu, 03 Aug 2017

As described previously, the membrane switch controller can be created by separating two sets of conductive traces with flexible insulating material in between:

Membrane keyboard diagram

Membrane keyboard diagram.
Credit: FourOhFour, WikiMedia Commons

Creating this ‘holes’ layer was a seemingly straightforward yet laborious process – cutting 900 holes precisely ain’t easy!

To ensure that the 30 columns and 30 rows are perfectly exact, we used the neoprene cutout stencil to trace and cut squares with an X-Acto knife. We also experimented with a Dremel cutting attachment, but decided against it due to a higher chance of error and a lack of safety equipment.

The process loop was as follows:

  1. Align stencil with as many cut squares as possible, leaving the left or rightmost columns on uncut membrane space.
  2. Trace squares lightly through the stencil. Remove the stencil and very slowly cut with the X-Acto knife. If the blade has difficulty cutting, replace it with a new one.

    Membrane foam with square holes cut in

    Traced squares among cut squares, foam crumbles.

  3. Check alignment and proceed to next row or column.

After many hours of work, Mani and I completed all 900 holes! Here is the entire grid, some trimming still required:

One meter by one meter with 30 by 30 grid made of about 1 inch holes

The almost-complete grid: all 900 holes with some trimming required.

Application of Copper Tape to Neoprene Sheets
Tue, 04 Apr 2017

Whether the touch matrix works once the three separate neoprene layers are put together depends on the accuracy with which we apply the copper tape to the outer layers of neoprene. Furthermore, the copper tape strips have been soldered to wires cut to specific lengths so they can reach the Arduinos, which will be stored on the extra neoprene left over on one side of the 1 meter x 1 meter matrix. Thus, the copper tape strips have to be applied to the neoprene in a certain order to ensure that the lengths of the wires match the distance they will be from the Arduinos.

The process of applying the copper tape involved the use of very careful measurements with the aid of the laser cut wooden stencil.

By first layering all three neoprene sheets on top of one another, the corners of the matrix were marked.

Then, using the stencil and a straightedge, the outlines of each individual copper tape strip were drawn in Sharpie onto each of the two outer neoprene layers.

Once outlined, the copper tape strips were cut to length and then applied to the neoprene. The technique used to apply the tape was similar to applying a screen protector — peeling the backing away to reveal the adhesive side a bit at a time as the tape is pressed onto the neoprene.

Upon completion of the application process, the final step was to use the middle layer of neoprene (the one that will eventually have 900 1 inch x 1 inch square cutouts in it) to ensure that the copper tapes lined up with little to no offset error.

All Things Fabric: Testing, Printing, Pricing
Thu, 23 Feb 2017

Fabric Testing

As outlined in ‘Face’ of the Quilt, I am testing fabric to see how it diffuses light, affects affect, and obscures or reveals hardware. I quickly tested a broad array of swatches. Their textures included felt, mid-weight cotton, a polyester-stretch mix, and lightweight linen. The colors included black, brown, grey, beige, and white. The colors are kept neutral because the background of the quilt will be similar to the original beige.

I was first intrigued by how introductory physics principles became useful: light-colored or white fabrics tended to diffuse light, while dark-colored or black fabrics tended to absorb it and reveal the shape of the LED beneath. I demonstrate this as follows:

This gradient of felt (controlling for weight and texture) shows how lighter fabrics diffuse light while dark colors absorb it

As for LED hue, there seemed to be less effect on diffusion because all colors are high-intensity. Here, we see slightly different diffusion for blue-toned colors, but the diffusion remains mostly consistent:

The LED hue has less effect on diffusion due to high intensity (seen here under brown felt)

If I were to pick a personal favorite, it would be the mid-weight cotton-linen mix. The flecks and uneven texture lent it a natural, handmade feel that could be reminiscent of the 1800s quilt – while the modern cool blue light is the perfect juxtaposed touch:

Cotton-linen mix – a very natural, rough feel – juxtaposed with cool blue modern light

For a full tour of the test, below are the collective effects of the different fabrics, where color had a greater effect on diffusion than texture:

 

Fabric Printing

We are investigating printing the Peabody design through an online fabric design service, Spoonflower, instead of sewing the design ourselves. Printing not only requires less time and money, but may also provide better presentation.

The idea is to design the topmost layer of the quilt in Illustrator exactly to scale and order the design to scale from Spoonflower. The options for the design are as follows:

  1. Orange fill on laser-cut design. This layout borrows dimensions from the stencil used to cut holes for the LEDs, so the grid aligns exactly with where the LEDs shine through.
  2. Orange grid with major axis and minor gridlines. This design is closer to the original design that Peabody envisioned.
  3. Orange grid with thin gridlines and a de-emphasized major axis. This is still more similar to the original Peabody design than #1, but reduces emphasis on a divide. Due to the spacing of our LEDs, this is more likely than #2.

Though the online interface for the Peabody quilt allowed for flexibility on the grid design, we are physically constrained by the spacing of the LEDs which do not account for a major axis.

Next, I discovered that the maximum size for Spoonflower designs is about 21″, though you order fabrics in printable areas ranging from 41″ to about 52″. So we cannot upload our design as-is. Instead, we need to upload one corner of the grid and repeat it in a “mirror” fashion:

Spoonflower design interface. Single corner design selected (first row on left). Shown is a mirror pattern that can be used to print a full grid.

 

Fabric Pricing

I performed a Python web scrape of Spoonflower’s product page and exported the results to CSV to create a straightforward list of every type of fabric offered and the exact cost for our quilt (1m x 1m). Accounting for error and extra fabric, though, actual costs may be higher.

Fabric | Price per Yard | Price for Quilt (1m x 1m)
Basic Cotton Ultra | $17.50 | $19.14
Modern Jersey | $26.50 | $28.98
Cotton Spandex Jersey | $26.75 | $29.25
Fleece | $27.00 | $29.53
Minky | $27.00 | $29.53
Satin | $18.00 | $19.68
Premium Quilting Weight | $19.00 | $20.78
Cotton Poplin Ultra | $20.00 | $21.87
Poly Crepe de Chine | $23.00 | $25.15
Silky Faille | $24.00 | $26.25
Performance Knit | $24.00 | $26.25
Lightweight Cotton Twill | $26.00 | $28.43
Linen Cotton Canvas Ultra | $27.00 | $29.53
Organic Cotton Interlock Knit Ultra | $27.00 | $29.53
Organic Cotton Sateen Ultra | $27.00 | $29.53
Sport Lycra | $32.00 | $35.00
Heavy Cotton Twill | $32.00 | $35.00
Eco Canvas | $32.00 | $35.00
Faux Suede | $34.00 | $37.18
Silk Crepe de Chine | $38.00 | $41.56

(All fabrics are sold by the yard; quilt prices reflect 1 meter ≈ 1.09 yards.)
The ‘Face’ of the Quilt: Fabric and Light Diffusion
Thu, 19 Jan 2017

Per the admittedly true cliché, you never get a second chance to make a first impression. We are reimagining the original look and feel of the 1800s quilt in modern fabrics and designs. In taking this route, we need to ensure that the materials we select do not force us to diverge from Peabody’s intellectual contribution and intent. To review, a rendering of the original design is as follows:


Elizabeth Peabody’s original design

Aspects to consider include:

  1. How well do the NeoPixels perform in varying light conditions? Many example projects photograph NeoPixels in the dark. Most likely, we want users to interact with the quilt in partially dim lighting or, if necessary, office lighting.
  2. How will we diffuse the light to create defined borders and retain clarity? In Peabody’s work, richly colored square cutouts were placed on top of a grid. Now, we seek to recreate the effect using LED lights.
  3. What kind of texture do we want, visually and physically? Should the two mismatch? How can the result convey a message? (i.e., a soft and smooth texture may convey innocence and warmth; a rough texture may convey stability, trustworthiness, weight)
  4. Do we want the hardware to be felt or obscured? Pragmatism may dictate this result, but the difference may determine whether the user perceives an electronic quilt or a different device.

In regards to 1 (light conditions) and 4 (hardware)… we can examine projects made with NeoPixels from Adafruit. The company that created our LED strips publishes many popular example projects that illuminate (heh) the power of NeoPixels:


NeoPixel (our LEDs) glow fur scarf


NeoPixel (our LEDs) bandolier costume

In regards to 2. Defined borders and clarity… we can take a cue from recessed LED light fixtures. Some office buildings opt for LED versus fluorescent lighting fixtures. In these cases, they may use panels of LEDs with polystyrene (PS) diffusing plates of varying thickness. A possible implementation could include a single 1 meter by 1 meter sheet of PS plastic with divisions cut between each touch location to allow for pressing. Examples of LED light fixtures include:


Recessed LED light fixture with polystyrene (PS) square diffuser plate


LED panel diffuser with PS plate

In regards to 3 (texture)… we have not decided between rough and smooth, but a possible implementation may include rough or smooth cotton fabric. Cotton is affordable, sturdy, and offers a wide range of textures. A rough fabric may make it more difficult to find a color outside natural shades like light brown and light green, and may block more light. Smooth fabrics tend to come in more modern colors and diffuse light more softly. Examples of fabric include:


An example of a rough fabric; may allow less light to show through


An example of a smooth cotton fabric; may allow more light to show through

In regards to 4. hardware salience… in meetings we have discussed how we might create the “LED sandwich.” A possible implementation is to create “troughs” for the LEDs by placing strips of foam in between each strip of LEDs. This way, the surface will feel roughly uniform, and a potential user may believe a single LED panel lies beneath. However, we may encounter issues with this approach because our neoprene is 1 millimeter thinner than our LED strips, as pictured:


Diagram depicting LED strips with our neoprene foam between; the strips are slightly taller than the foam we purchased

The Peabody Project’s present priority is to complete the physical product. We want to allow for interaction, say in a gallery or exhibition, and sooner rather than later allow anyone to learn about and appreciate Elizabeth Peabody’s contributions to education and data visualization. Combinatorially, we already see numerous possible implementations for our design, each with its own implications for the end product (financial cost, affective experience, etc.). After experimenting with these ideas on a small scale, we will be able to determine which path to take for each.

Neoprene Square Cutout Stencil
Mon, 12 Dec 2016

One aspect of this project is cutting 900 different 1 inch x 1 inch squares out of a neoprene sheet. The centers of these squares have to be spaced exactly 3.333 cm apart, vertically and horizontally. Now, we could go about drawing each of these squares on the neoprene with a Sharpie and then cutting them out with an X-Acto knife — but this would be time-intensive as well as error-prone.

Thus, we decided to use Adobe Illustrator to generate a correctly spaced stencil, using the values specified in the data sheet provided by the manufacturer of the LED strips.

Image of Stencil v1.0 in Adobe Illustrator

After designing the stencil in Illustrator, we used a laser cutter to cut the stencil out of plywood. Plywood was used because it was cheap and durable enough to keep its shape as we cut out the neoprene squares.

Our first prototype of the stencil turned out to be off by 0.333 cm between each square, which led to an accumulating offset error relative to the LEDs. This error could not have been caught by simply drawing on the neoprene — which could have wasted time and materials. Because we could easily fix the offset in Illustrator, we had another stencil cut within a couple of hours and didn’t lose any neoprene in the process.
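The arithmetic shows why the error mattered (a back-of-the-envelope check, assuming the error compounded across the full row):

```javascript
// Target spacing between square centers: 100 cm / 30 squares ≈ 3.333 cm.
const targetCm = 100 / 30;
// The v1.0 stencil was off by 0.333 cm between each square.
const errorPerSquareCm = 0.333;
// Over a full row of 30 squares, the error accumulates to about 10 cm,
// far too much for the LEDs to shine through the holes.
const totalDriftCm = errorPerSquareCm * 30;
```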

Neoprene Stencil v1.0

Our final neoprene cutout stencil with correct dimensions is shown below:

Neoprene Stencil v2.0

For our final cutout stencil, we also used a thicker sheet of plywood to ensure the rigidity of our stencil as we cut the squares out of the neoprene.

]]>
https://dhlab.lmc.gatech.edu/floorchart/neoprene-cutout-stencil/feed/ 0 434
LED Wiring https://dhlab.lmc.gatech.edu/floorchart/led-wiring/ https://dhlab.lmc.gatech.edu/floorchart/led-wiring/#respond Fri, 29 Jul 2016 08:47:21 +0000 http://dhlab.lmc.gatech.edu/?p=420 The LED strip has three inputs: +5V, ground (Gnd), and data. The data signal must enter from one end of the strip, as indicated by the arrows.

20160715_140926

20160715_140721

The power for the strip can come from either end. The +5V line needs to be connected to a 5 V power supply. The ground needs to be connected both to the ground of the power supply and to a ground pin on the Arduino Mega. The data input needs to be connected to the Arduino Mega through a 470 Ω resistor.

The NeoPixel guide also suggests placing a 1000 µF capacitor in parallel with the power input. In this case, two 4700 µF capacitors are used for every 10 LED strips.

The resistor has been soldered to the wire and is covered by shrink tubing.

20160715_141658

In order to power all 30 strips, terminal strips are used to distribute the power. The first terminal strip is connected to the power supply and splits the power out to three more terminal strips.

20160715_140816 20160715_141149_1

Each of the secondary terminal strips can host 11 connections: 10 are used by the LED strips, and the extra one can be used to power the Arduino Mega. Two 4700 µF capacitors are placed on the terminals where power connects to the strips.

The strip itself is separated into positive and negative parts.

20160712_153718_1

The NeoPixel guide recommends budgeting 20–60 mA per pixel. Since there are 900 pixels, the total current budget should be 18–54 A. But I found that a single 10 A power supply is more than sufficient to power all the pixels at about 1/3 brightness. In case that is not enough, a second power supply can be connected to the main terminal strip.
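The current budget above is easy to verify with a back-of-envelope calculation (figures are from the NeoPixel guide as cited in the post; the linear brightness scaling is a rough worst-case assumption, since real patterns rarely drive every pixel full white):

```python
# Back-of-envelope NeoPixel power budget. The 20-60 mA/pixel figures
# come from the guide; treating brightness scaling as linear is a
# rough worst-case assumption, not a measurement.

PIXELS = 900
MA_PER_PIXEL_MAX = 60   # full-white, full brightness
MA_PER_PIXEL_MIN = 20

worst_case_a = PIXELS * MA_PER_PIXEL_MAX / 1000   # 54 A
best_case_a = PIXELS * MA_PER_PIXEL_MIN / 1000    # 18 A

# At roughly 1/3 brightness the worst case drops to about a third,
# and typical patterns draw far less -- which is why a 10 A supply
# can turn out to be sufficient in practice.
at_third_a = worst_case_a / 3

print(worst_case_a, best_case_a, at_third_a)
```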

]]>
https://dhlab.lmc.gatech.edu/floorchart/led-wiring/feed/ 0 420
Taking Membrane Switches to Scale: First Steps https://dhlab.lmc.gatech.edu/floorchart/taking-membrane-switches-to-scale-first-steps/ https://dhlab.lmc.gatech.edu/floorchart/taking-membrane-switches-to-scale-first-steps/#respond Tue, 26 Jul 2016 03:14:38 +0000 http://dhlab.lmc.gatech.edu/?p=390 Recently my lab mate and I were fantasizing about theoryland. It’s this magical place where you go up to a whiteboard and, using your extensive domain knowledge, let ephemeral non-toxic marker sketch out your dream system. But no Expo marker can predict the truth of real-world application: Murphy’s law. Bringing any system to life is not easy, but it’s worth it.

To take a simple keypad prototype to scale, I first followed Dr. Klein’s advice and created a simple “map” of the membrane switch layout at scale:

at-scale_rows+cols

Row and column ‘map’ for conductive traces at scale, 1×1 meters

I experimented with multiple materials for the traces:

IMG_5423

1. Single core copper wire (to be partially sheathed at ‘touch points’ where a row and column intersect)

IMG_5420

2. Slim (0.5″) copper tape with conductive adhesive

IMG_5421

3. Wide (1″) copper tape with conductive adhesive

IMG_5424

4. Slim (0.25″) conductive fabric tape

I laid them out on a 1/4” thick neoprene mat, meant to eventually serve as the ‘insulating separator’ between the two conductive trace layers.

IMG_5458

Top to bottom: 1″ copper tape, single core wire, conductive fabric tape, 0.5″ copper tape on neoprene

After testing, the 1″ copper tape won: it provides a wide conductive surface area, sticks with conductive adhesive (which can also be used to attach wires underneath), is relatively durable under repeated use, and — very importantly — stays economical at 60 meters.
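The tape’s electrical suitability can also be roughly sanity-checked. Assuming a typical foil thickness of about 35 µm (my assumption — the post doesn’t specify the tape’s thickness), a 1 m run of 1″ copper tape has negligible resistance:

```python
# Rough resistance estimate for a 1 m copper tape trace. The foil
# thickness is an assumed typical value, not a measurement.

RHO_CU = 1.68e-8     # resistivity of copper, ohm-meters
LENGTH_M = 1.0       # one row/column of the 1 m grid
WIDTH_M = 0.0254     # 1 inch tape width
THICK_M = 35e-6      # assumed foil thickness

resistance = RHO_CU * LENGTH_M / (WIDTH_M * THICK_M)
print(f"{resistance * 1000:.1f} milliohms per meter")
```

On the order of tens of milliohms per meter — far too small to matter for switch sensing, which supports choosing the widest tape.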

The one downside is copper tape’s brittleness: it performs best on hard, flat surfaces and most likely won’t allow for a fully soft, flexible quilt when picked up. Thankfully, we interact with the quilt on a flat surface, so the touch mechanism can become ‘part of the table,’ allowing a soft material on top to become ‘the quilt.’

I’ve begun putting copper tape on the printed grid:

IMG_5467

Initial layout on grid – testing copper tape (seen as columns from this angle) and fabric tape (seen as rows, left to right)

So far, the copper tape has not been working very well. I have prototyped with alligator clips, with male-male wires placed directly on top of the copper tape, and with male-male wires in series with resistors. None have worked as reliably as the silver traces on plastic membranes did with male-male wire. I will continue debugging and research how to construct a working copper-trace membrane keypad, such as a large version of this example.

]]>
https://dhlab.lmc.gatech.edu/floorchart/taking-membrane-switches-to-scale-first-steps/feed/ 0 390
Prototyping Membrane Switches https://dhlab.lmc.gatech.edu/floorchart/prototyping-membrane-switches/ https://dhlab.lmc.gatech.edu/floorchart/prototyping-membrane-switches/#respond Tue, 19 Jul 2016 01:23:38 +0000 http://dhlab.lmc.gatech.edu/?p=348 Membrane switches are very simple: physically, there are no “fancy” materials involved. It’s easy to see why they have been on our microwaves, letting us accidentally burn Pop Secret, for many years and counting.

With only the following materials I was able to make a very simple keypad controller:
– silver conductive ink
– plastic sheet
– felt
– electrical wire
– Arduino

I will be the first to admit that this is a rough prototype — I simply wanted to prove the concept.

Luckily, I was not the first to want a homemade membrane switch. I found a membrane switch Instructable which, despite its informal photography and formatting, was as useful and straightforward as they come. As it shows, making a keypad is very easy!

Prototyping Process

I printed out the following from the Instructable to use as stencils:

rows-and-columns

Conductive trace stencil rows and columns, with the slim end being the side that inputs to the Arduino. Source: Instructables, User TheBestJohn

In order to create the following keypad:

keypad

Keypad layout

I laid the plastic sheets on top of these stencils and drew traces in silver conductive ink:

IMG_5167

Stencils with plastic sheets on top, drawn on with silver conductive ink

Important note: this is not the intended application of silver ink pens, which are meant for small repairs. However, I wanted to simulate silver ink printed on a membrane, per the manufacturing process. As I squeezed the pen with nontrivial effort, it quickly became apparent that either I need to work on my grip strength in the gym, or the pen is not designed to be squeezed and drawn with for long periods of time.

To feed electricity to these traces, I attached a male wire to each using electrical tape and fed the other end into the Arduino Mega.

IMG_5175

Attaching wire to conductive trace with electrical tape.

IMG_5176

Attaching wire to conductive trace with electrical tape.

Now the conductive trace rows and columns must be separated. I didn’t have the proper “squishy” insulating material on hand, so I used white felt instead. It was a little too stiff, but it worked. This created the final product:

IMG_5181

House-made, locally crafted membrane switch: the final product

IMG_5182

Taking a look “under the hood.” I plead that you refrain from passing judgement on my egregiously uneven felt holes

I tested the keypad with the provided Arduino code (link to download from the author) and it worked perfectly:

membrane-456

Testing, testing: numbers “4, 5, 6” on keypad

serial-456

Arduino’s serial output, exactly as intended: “4, 5, 6”
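The Instructable ships its own Arduino sketch, which is what was actually used here. Purely as an illustration of the underlying matrix-scan idea, here is a hypothetical Python simulation: energize one row at a time and check which columns show continuity.

```python
# Hypothetical simulation of membrane keypad matrix scanning.
# In hardware, "energize row r" means driving that trace and
# "continuity at (r, c)" means reading the column pin; here a set of
# closed (row, col) contacts stands in for the physical membrane.

KEYS = [["1", "2", "3"],
        ["4", "5", "6"],
        ["7", "8", "9"],
        ["*", "0", "#"]]

def scan(pressed):
    """Return the labels of all currently pressed keys.

    pressed: set of (row, col) contacts that are closed.
    """
    found = []
    for r in range(len(KEYS)):
        # Energize row r, then test each column for continuity.
        for c in range(len(KEYS[0])):
            if (r, c) in pressed:
                found.append(KEYS[r][c])
    return found

print(scan({(1, 0), (1, 1), (1, 2)}))  # → ['4', '5', '6']
```

Pressing the second row’s three keys yields “4, 5, 6,” matching the serial output in the test above.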

I created a cover for the keypad as well, but the felt was too stiff for presses to register through both felt and paper.

Final Thoughts

Seeing this design work so well simply brightened my day. There’s no need for the finicky grounding that capacitive touch requires. Next, we need a conductive material that stays affordable at 60 meters total (60 rows and columns) and conducts reliably along its full length (1 meter for each row and column).

Implementing touch sensing and light-up feedback with microcontrollers to create Elizabeth Peabody’s quilt is fitting, given her way-ahead-of-its-time belief that play is intrinsic to learning. Our Neo-Victorian quilt is feeling very real, and very exciting.

]]>
https://dhlab.lmc.gatech.edu/floorchart/prototyping-membrane-switches/feed/ 0 348