Interactive comics

In this interactive comic, my teammate (Peter) and I tell the story of a rabbit named Oreo. He loses his best friend, his shadow, and on the way to finding him he meets a tree and a building, each of which gives him a different answer about friendship. Finally, he finds another rabbit and they become good friends.

Link: http://192.168.50.184/~wh915/interactive_comics

I was responsible for all the background images, the Photoshop work, the GIF making, and the code for the first half of the story. On the code side, I used several different libraries to achieve different effects: BufferLoader.js for playing sound, scrollReveal.js for the scrolling effect, reset.css, Bootstrap for the layout framework, and AniCollection for animation.

[Screenshot]

With scrollReveal, a panel fades in once you scroll to the point where half of the panel is in the viewport (viewFactor: 0.5).

[Screenshot]

 

I also used animation to make Oreo move and the balloon fly; the code looks like this:

// scrollreveal
window.sr = ScrollReveal({
  duration: 2000,
  reset: true
});

sr.reveal('#scene-0', {
  viewFactor: 0.5
});

/*
Scene 1
*/
sr.reveal('#scene-1', {
  viewFactor: 0.5
});

sr.reveal('#scene-1-1', {
  viewFactor: 0.5,
  afterReveal: function() {
    var bgWidth = $('#background-1-5').width();

    // move the balloon from its starting spot up and off the panel
    $('#balloon')
      .css({
        top: '100px',
        left: '100px',
        width: '600px',
        height: '650px'
      })
      .animate({
        top: '-700px',
        left: '100px',
        width: '600px',
        height: '650px'
      }, 2500, function() {
        console.log('balloon flies');
      });
  },
  beforeReset: function() {
    // put the balloon back so the animation can replay on the next reveal
    $('#balloon')
      .css({
        top: '100px',
        left: '100px',
        width: '600px',
        height: '650px'
      });
  }
});

I used the reset option so that any time you scroll back to an animated scene, the animation resets and plays again.

Using Bootstrap to build a basic panel framework lets the page show two or more different images side by side and still look neat, like this:

[Screenshot: two panels side by side]

To understand how the Bootstrap grid works, I found this image on Google:

[Figure: the Bootstrap 12-column grid]

The container is divided into 12 columns. If you want an image to span the whole page, you write "col-md-12"; if you want two images on one row, you just divide 12 by 2 and write "col-md-6". For example:

[Screenshot: our column markup]
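Since the markup screenshot doesn't reproduce here, a minimal sketch of what a Bootstrap 3 layout along these lines might look like (the image names are placeholders, not our actual files):

<div class="container">
  <!-- one image spanning the full width: 12 of 12 columns -->
  <div class="row">
    <div class="col-md-12">
      <img src="scene-1.png" class="img-responsive" alt="Scene 1">
    </div>
  </div>
  <!-- two images sharing a row: 6 + 6 = 12 columns -->
  <div class="row">
    <div class="col-md-6">
      <img src="scene-2-left.png" class="img-responsive" alt="Scene 2, left panel">
    </div>
    <div class="col-md-6">
      <img src="scene-2-right.png" class="img-responsive" alt="Scene 2, right panel">
    </div>
  </div>
</div>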

We also have a rain animation, adapted from the rain code at https://codepen.io/ruigewaard/pen/JHDdF/, and sound based on https://www.html5rocks.com/en/tutorials/webaudio/intro/

[Screenshot: the rain scene]
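The exact BufferLoader.js calls aren't shown here, but a minimal sketch of loading and playing a sound with the plain Web Audio API, in the spirit of that HTML5 Rocks tutorial, might look like this (the file name is a placeholder):

var context = new (window.AudioContext || window.webkitAudioContext)();

// download and decode the sound file, then hand the buffer to a callback
function loadSound(url, callback) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    context.decodeAudioData(request.response, callback);
  };
  request.send();
}

// play a decoded buffer through the default output
function playSound(buffer) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start(0);
}

loadSound('rain.mp3', playSound);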

To make the comic more interactive, we added conversation and narration to the story; the text shows up when you click a picture. We also used the typed.min.js library so that, after you click, the text appears as if it is being typed by a person.
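As a rough sketch of the idea (not our exact code; the element IDs and dialogue here are placeholders):

$('#panel-3').on('click', function() {
  // type the dialogue into the panel's text element, one character at a time
  new Typed('#dialogue-3', {
    strings: ['Have you seen my shadow anywhere?'],
    typeSpeed: 40,
    showCursor: false
  });
});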

CM Final Project Documentation

Lessons Learned:

  1. If completion is your goal, change directions early on in the project. Even if you’re making baby steps with error-ridden code or a stubborn website, move on. Replicate and modify tried-and-true code.
  2. Don’t try to follow the code of the smartest kid in your class. Just don’t.
  3. Don’t commit to something just for the sake of the project. Whatever you choose should be at least mildly interesting.
  4. Avoid Instagram like the plague when doing data scraping (it gets updated too frequently to avoid being scraped, so example code on the web often won’t work).

The Original Plan

I’ll probably be burned at the stake for saying this, but I hate museums. There is nothing more unappealing to me than spending an afternoon looking at old paintings by dead white guys (however, if the paintings are by dead women, I might perk up juuuust a bit). Perhaps I’m too low-class for culture. Because I am an intelligent human being, I decided to base my project on the object of my hatred (because there really isn’t much else to do in Paris besides go to museums anyway). My plan was to capture other people’s museum experiences by scraping the web for images and text (i.e. tweets and/or Instagram photos posted by museum goers) and somehow present it. Early iterations of the plan included projection, and the idea of VR was tossed into the mix.

Problem number 1

(The first problem was actually not remembering anything from the museums I’d been to, but I remedied that by finding the app for the Centre Pompidou and picking a few pieces from there. Thanks Matt and Marianne for the suggestion that saved me from having to stand in the monstrous Pompidou line.)

Museum goers are just as boring as the museums themselves. Because my roommate and I have a habit of posing with paintings/sculptures and/or giving them new captions, I assumed there would be other people on the internet doing the same. I was horribly mistaken. The only captions I found were along the lines of “[x] painting by [y] artist at [z] museum.” *yawn* I decided to reboot: I’d choose a bunch of paintings by a specific artist and surround his works with a bunch of related text from social media in a VR environment (it is worth noting here that at this point I’d never worked with VR before. I’ve never even experienced VR. Oculus Rift might as well be the tooth fairy).

Problem number 2

I tried to use Richard and Nicole’s code as examples. The server that hosted Nicole’s code had expired, and she was only able to retrieve partial code from her email. I tried to tweak that, to no avail. Richard’s code was way over my head. After discussing it with him and spending many hours trying to pick it apart, I began to understand some things, but I felt like a Mario Kart noob trying to take on Rainbow Road.

Problem 3

In my desperation I kept making stupid mistakes, like forgetting to import Beautiful Soup when I ran the code. Because Windows users seem to be the minority at NYU/SH/NY/AD, and because Windows-friendly documentation is practically impossible to find, I’ll post a reminder: MAKE SURE YOU’RE ACTUALLY FOLLOWING WINDOWS DOCUMENTATION, AND NOT MAC DOCUMENTATION THAT IS MASQUERADING AS WINDOWS-FRIENDLY.

Problem 4

This is an important yet underrated point: take care of yourselves. Illness, undernourishment, and exhaustion got the best of me. I quite literally crashed at my computer one night and woke up barely in time for class, with nothing to present but broken code and a broken soul.

—–

Nuts and Bolts

I used A-Frame to attempt to create the VR environment, and to scrape Instagram I attempted to use Selenium and Beautiful Soup.

This code pretty much just opens Instagram.

[Screenshot: the Selenium code]
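The screenshot isn’t legible here, but a minimal Selenium sketch that “pretty much just opens Instagram” might look like the following (it assumes chromedriver is installed; Beautiful Soup then gets the rendered page source):

from bs4 import BeautifulSoup
from selenium import webdriver

# open a browser window and load Instagram
driver = webdriver.Chrome()
driver.get('https://www.instagram.com/')

# hand the rendered page over to Beautiful Soup for parsing
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.title.text)

driver.quit()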

This code creates my floating boxes in A-Frame:

[Screenshot: the A-Frame code]
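This screenshot doesn’t reproduce either, but floating boxes in A-Frame are just a few entities inside an <a-scene>. A hedged sketch, using current A-Frame primitive names (which may differ slightly from the 2016 release), with made-up positions and colors, and assuming aframe.min.js is loaded on the page:

<a-scene>
  <!-- a few boxes floating at different heights around the viewer -->
  <a-box position="-1 2 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
  <a-box position="1.5 1 -4" rotation="0 20 0" color="#EF2D5E"></a-box>
  <a-box position="0 3 -5" rotation="0 10 0" color="#FFC65D"></a-box>
  <a-sky color="#ECECEC"></a-sky>
</a-scene>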

These are my floating boxes:

[Screenshot: the floating boxes rendered in the browser]

—-

Research

A-Frame

A-Frame GitHub

Another helpful piece of A-Frame documentation

Selenium Documentation

Selenium with Python


The Vietnamese Diaspora || Kevin Pham’s Final Project


 

So all my content is up there for your perusal. This is just going to be writing about the whole thing. This project came together kind of late because I was struggling intensely over what I wanted to do. But I knew the general idea was this:

I wanted to do this project because I wanted to take advantage of my language ability to make something that would simultaneously accomplish an assignment for class as well as bring personal gain to myself.

Seeing as the Vietnamese population in Prague is quite large, I wanted to explore the Vietnamese Diaspora (the movement of people from their original homeland) and ask people about their journeys to Prague.

From what I picked up from my interviews, the Vietnamese people lament that there is some hostility and animosity held towards them for various reasons, but for the most part, the Czech people have accepted the Vietnamese as a part of the national identity.

It is clear that their presence is deeply ingrained in the city, evident from the number of Vietnamese/fusion restaurants and mini markets. You can essentially count on the shopkeeper being Vietnamese when you walk in.

But what I originally had in mind did not come to fruition because it would prove to be way too difficult to get Vietnamese people to speak on camera. That was mighty ambitious of me.

So I changed gears and thought about mapping and what it meant. And I realized I could use Fulcrum to walk around and get locations of Viet shops/restaurants and I could have a data visualization of the shops that are around. That also got me thinking because I realized that it had relevance to the ongoing refugee crisis. Not that this project was built for that (mostly because I was not sure how to expand more on that level of thinking), but rather it is intended to be a small reminder of the similarities people have gone through and the importance of empathizing with others. This is what happened next with the project:

With all that being said, this project was about asking people where they came from in Vietnam and mapping out where they are today as well as their origins. I wanted a map visualization of both, especially for Prague, because I wanted to show how much of an impact the Vietnamese community has on the city.

And in a country where polls on refugees are overwhelmingly negative despite the relatively low number of refugees in the country, I would urge for people, especially in the Czech Republic, to remember that there was once a point in which people did not like the idea of migrants coming in. But just like the Vietnamese, migrants to a new country are very much capable of contributing to society.


 

So moving forward, my project would encompass three parts: 1) Video 2) Interviews 3) Mapping/Data Visualization. This is the process for each:

  1. Unfortunately, the video could not be produced. I wanted an interactive video that would give users a bit of background on the Viet people in the Czech Republic. But problem after problem came up. First my laptop battery fried, so I had no access to my editing programs. Then, after I filmed the video and prepared to edit, Adobe Premiere Pro kept freezing and crashing on every school computer I used, so eventually I had to give up on trying to complete it because it was hindering my progress on other things. If I could do this again, I would have liked to develop a video.
  2. For the interviews, I did two types: a simple one and an extended one. In the simple interviews, I asked four basic questions that would get me the information I needed about the person and their connection to the Czech Republic and Vietnam. For the extended interviews, I asked deeper questions and gauged what they thought their life was like in the Czech Republic, what their journey here was like, and how they felt the Vietnamese community fits in within Prague. That part formed the basis of my mapping journey, which I will elaborate on in a bit. The simple interviews served as data collection so I could visually see the Viet shops in the city. Interviews were fun because I got to learn a lot about Vietnamese people, really understand what they had to go through, and compare how their lives are different from my own and that of my parents. I also got to practice speaking a lot of Vietnamese. Interviews were also crappy because there were a ton of people that were either really snappy and tense or just skeptical of me, which I understand. I stayed patient, but I got real tired of people continually asking me why I needed to know their names after I had explained multiple times.
  3. This is the main crux of my project. I had the data visualization with CartoDB in which I mapped where the shops were (with help of imported data from Fulcrum). I also used a thing called Storyful in order to show the journey of my interview subjects and it basically followed a world map around to show everything. This took a while to do with organizing all the data and stuff. But I think it is a vital piece to my project and I am proud of the way it turned out. I like the way that it looks too!

All in all, I think there were a lot of things about this project that I could’ve done a lot better with, but at the same time, I refuse to knock down my own work because I believe that I worked hard on what I had. And I am overall just happy with the project because I met so many people and had great conversations (some longer than necessary) and it was a fulfilling experience for me and I see it as a fitting way to end my experience in Prague.

ZZ’s CM Final Project: SECRETS IN THE DARK | Week 13 & Week 14

https://github.com/zzhangnahzz/SECRETSINTHEDARK.git

This project, SECRETS IN THE DARK, has two components: participants’ experience in the darkroom with me, and a website that showcases all the anonymous secrets I collected there. I created a Facebook event and invited people who I am sort of close with. Many of them showed up. The experience in the darkroom involves a lot of collaboration, as follows:

I first introduce the project to the participants, much as I am doing now. I ask them to share a secret with me, and in return I share a secret of mine with them.

At the same time, I ask them to come up with an adjective that they think best describes this secret, or how they personally feel about the project. Once we have shared our secrets and they have an adjective in mind, I turn the red safelight (used for printing photos) back on.

Under the dim red light, I ask them to write down their adjective on a sheet of glass within a pre-measured and pre-marked area. I make sure they are aware that they can do anything with this area (draw, write, etc.) as long as the adjective is visible. Once the word is written, I place a piece of light-sensitive matte photo paper under that area.

Once the paper is placed correctly, I ask the participants to turn on the enlarger to project whatever they have just written on the glass onto the light-sensitive paper. (The enlarger’s exposure time, 5 seconds, and contrast, level 4, were adjusted by me according to test strips I made before the participants came into the darkroom.) Then I immerse the photo paper in paper developer for 60 to 90 seconds, followed by the stop bath for 20 to 30 seconds, then the fixer for 5 to 10 minutes. Once the photo paper is in the tray of fixer, the normal light can be turned on, and the participant may leave.

[Photos of the darkroom process]
Then I put the photo paper into running water for 1 minute and use a squeegee to drain it. The paper then needs to be hung up until it dries completely; I scan the dry prints and post them here along with the secrets I collected in the dark.
[Photo]
I was using Tumblr to showcase those secrets: http://secretsinthedarkroom.tumblr.com. But given the fact that it is not a highly modifiable environment, I followed Matt and Marianne’s suggestion of making my own website. I found a template with a basic grid system and deleted everything except the grid. I then added a hover effect that makes text show up, plus links, layouts, and an about page, in addition to adjusting the photo scans. I am still waiting for instructions on how to publish my webpage. Here are some photos for a preview:
[Screenshots: website preview]

Max Bork Final Project: Comfort Zones

My final project consisted of an interactive installation that took place over the course of several days in the NYU Berlin dorms and academic center. But it also involved a blog that I created to allow those who were not physically present for the installation to experience how the project went.

http://imacomfortzones.blogspot.de/

The purpose of the installation was to gather people’s thoughts and experiences regarding the idea of the comfort zone. The project centered around these questions:

1: What drives you to go outside of your comfort zone, or stay within it?

2: What do you feel once you have left your comfort zone or made a choice to stay within?

3: Why did you choose this side?

At its core, the installation worked by getting people to answer these questions by writing on small sheets of paper and leaving them for me to collect. But the point of the project was for it to simulate an actual experience of making a choice: to leave the comfort zone or to stay within it. The purpose of this was to stimulate people into giving more thoughtful answers and to provoke a meaningful dialogue about their own comfort zones. I also found that sometimes people ended up revealing a lot more through which side they chose than through what they wrote on the paper.

[Photo: the installation]

As a participant walked up to the installation, they saw two tables, each cordoned off with tape on the ground. There was also a set of directions to tell people how it was supposed to work (i.e. only pick one side, take one candy, etc.). On one table was a clear bowl of candy, along with the response sheets. On the other table was an opaque bowl full of small folded sheets of paper, along with its own response sheets. On the small folded sheets were pictures of assorted objects that symbolized either failure or defeat. The idea was to create a scenario where one side created a feeling of some degree of risk (will the folded papers be good or bad? Interesting or not?) and the other side provided complete certainty about what would happen for the participant, even though that certainty was only a piece of candy.

[Photo]

I put up the installation twice, but most responses came from the installation that was up for one day’s time in the Academic Center in a student lounge area. Once I had my results I displayed them publicly in the Academic Center on a four sided concrete pillar. In the center of the pillar was a sheet explaining the project. On the right were the responses from people who chose to stay within their comfort zones and on the left were those from the people who chose to leave. I arrayed the responses on small sheets of paper in the form of the rungs of a ladder going up the pillar, with a piece of verse at the top that I thought went along with the piece thematically.

[Photo: the displayed responses]

As for the results, I got significantly more responses on the “outside the comfort zone” side. As I mention in the blog, I felt like people thought that this was somehow the right answer, and that if they didn’t choose it they had to justify that they weren’t wrong, or deny that they cared by saying they just wanted some candy. But in my view, just wanting some candy was a perfectly valid response to the project. Isn’t that why we stay within our comfort zones anyway? We have the ‘candy’ right there, and sometimes we don’t even want anything else.

If I could have done anything differently, I would have made it harder to see what was in the uncertainty bowl. I also would have advertised even more than I already did and made the directions even clearer. I found that the more I advertised and the clearer I was with instructions (even if they seemed blatantly obvious to me, since I’d been working on it for hours and hours), the better it went. Some people still didn’t really get how to do it correctly, which I take to mean that I could have made it easier for them. But in my opinion the results were very thoughtful and the project certainly succeeded in starting a dialogue at NYU Berlin. It also got me thinking about my own comfort zones, and why I stay in them.

 

Week 14 & 15: Beyond Social Media — Bingling Du

A project site has been created: https://beyondmosaic.wordpress.com

Documentation of the project overview and results can be found at the above link.

——————————————————————————————————————————

  1. Research insights and some thoughts:
    The idea of doing this research, instead of simply collecting photos, was inspired by Matt.
    It was really great to hear people’s answers to these questions.
    An interesting phenomenon was the answer “I don’t know”, together with those people’s answers to the third question. Before doing the project, I was thinking about the similarity of these un-uploaded photos. However, I began to consider the opposite side of the question. It seems that my project started from a false hypothesis: that there must be some reason for people not to upload photos. People’s responses drove me to think more about this hypothesis, and to realize that uploading photos to social media isn’t the reason our mobile phones, tablets, and computers have cameras. At some point we seem to have started taking it for granted that we take photos in order to upload and share them with others, but that may be a great tragedy brought about by the growth of social networks and the explosion of information.
    I really appreciate the people who were willing to share their opinions with me; their answers gave me a lot to think about.

    Just like they said, there shouldn’t have to be a reason to take a photo in the first place.

  2. Code used to generate the photo mosaic:

    References:
    codebox/mosaic: https://github.com/codebox/mosaic
    danielballan/photomosaic: https://github.com/danielballan/photomosaic
    john2x/photomosaic: https://github.com/john2x/photomosaic

    I took a deep look at danielballan’s and john2x’s code, but eventually built my code on codebox’s mosaic project. All three projects gave me great examples of how the Python Imaging Library works and what kinds of amazing things I can do with it. I chose codebox because its model gives the user more flexibility to adjust the resolution and size of the photo, and it doesn’t limit the number of source images needed for the mosaic. The code works on Python 2.7. I tried to port it to a 3.5 environment, but the Pillow imaging library went through a large update in between, most of the code doesn’t work on 3.5, and the key function that arranges tiles during generation crashed completely. So I simplified the tile-arranging process, adjusted the functions that compare the difference between the image and the tiles to allow looser matches (which resulted in a lot of white tiles matching light-colored regions; see the sketch after this list), and removed the progress counter and the double confirmation before each step.

    Code for generating the mosaic:
    [Screenshots]
    Code for comparing the difference between the original image and the tiles to find the best fit:
    [Screenshot]
    Code for loading the original image:
    [Screenshot]
    Code for creating the image pool from the material images:
    [Screenshot]
    Code for the general operations:
    [Screenshot]

  3. Creating project portfolio using WordPress
    The website for my project is created with WordPress.
  4. Thoughts on the result of the mosaic
    I struggled to present the photos on the blog. Most of them are too big because they have extremely high resolution (partly because the original files have high resolution, and partly because, in order to create detailed mosaics with clear tiles, I had to make the files really big).
    An interesting thing is that the mosaic looks paler than the original file. I can guess at several reasons:
    1. The code I use pays more attention to the lightness of a color than to its hue, since it measures difference using raw RGB values.
    2. The raw material I gathered doesn’t contain enough colors, or the images are too colorful to be matched with the original file.
    3. My favorite one: no one can keep life colorful forever, but we tend to share the most colorful parts on social media and leave the most pale, simple, boring, and peaceful parts to ourselves.
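As a rough illustration of the RGB comparison described above (this is a sketch, not the codebox code itself; the function names are made up), the tile whose average color is closest to a region of the original image can be chosen like this:

from PIL import Image

def average_rgb(image):
    # mean R, G, B over every pixel of a (small) PIL image
    pixels = list(image.convert('RGB').getdata())
    n = len(pixels)
    return tuple(sum(channel) / float(n) for channel in zip(*pixels))

def rgb_difference(a, b):
    # squared distance between two average colors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_tile(region, tiles):
    # pick the tile whose average color is closest to the region's
    target = average_rgb(region)
    return min(tiles, key=lambda tile: rgb_difference(average_rgb(tile), target))

In this sketch, comparing only average colors gives looser matches than a per-pixel comparison would.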

Weeks: Final Project Documentation

The basis of my project is pretty much outlined in my previous posts and on the website itself. I created a website which documented five of my bike stories in Berlin alongside the Instagram posts of other people in the same places. I chose to focus on bike stories because a) even if it wasn’t explicit, most of the stories would have been about my bike anyway, and b) I’m on my bike all the time.

You can find the files here. Download the zip folder to view locally.

1. HTML/CSS/JS

This part was pretty straightforward. I started with the navigation, and then moved on to the individual pages. I worked first on getting all the text together, then on getting all the images together. Sarabi mentioned last week that she liked the homepage even without an image; I thought about it a little more and decided to move the navigation to the middle and just add a bike image. The site itself is pretty simple; I focused more on trying to get the information across.

I had some problems with the CSS, and I could not figure out how to get rid of an invisible margin that kept showing up. After falling into a Google spiral, I was still not able to solve the problem, but I learned a lot of other things in the meantime. I let it sit for a day, and the next day it took me a lot less time to figure out what was wrong. I also Skyped Matt for help with other CSS stuff.

I was looking at the website of a photographer, and I really liked the way his sidebar collapsed and opened, so I dug around his site a bit with Chrome developer tools and found the bit of JavaScript/jQuery that dealt with this effect. I did not understand it. It was so confusing, and the website was coded in 2011, so I figured there had to be a more efficient way to achieve the same effect. I ended up following their logic, but using really easy built-in jQuery/jQuery UI functions to achieve the menu collapse.

[Screenshot: the menu-collapse code]

 

The "one two three…" numbers are in a div that is not displayed, but that switches to another class once "story" is clicked. This took me a while to figure out, but I'm pretty happy with the end effect.
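A rough sketch of the kind of thing described above (the selectors and class names are placeholders, not the ones actually used on the site):

// collapse/expand the sidebar with a built-in jQuery animation
$('#menu-toggle').on('click', function() {
  $('#sidebar').slideToggle(300);
});

// reveal the hidden "one two three..." div by swapping its class
$('#story').on('click', function() {
  $('#steps').removeClass('hidden').addClass('visible');
});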

 

2. Instagram

This was trickier. First, I wanted to simply use the Instagram embed to curate a couple of images from a certain location, but that did not work out too well; the code is really meant for embedding one Instagram photo rather than a bunch. I then looked toward Instagram widgets, which were easy to use and could gather photos by hashtag. But I wanted to gather photos by their location ID, which one widget could do, but that service was being phased out, so I put the widgets aside and searched for something else.

Next, I tried to work with instafeed.js (instafeedjs.com). The concept seems pretty simple, and I understood the example codes, but I could not get the instagram photos to show up, not even with the simplest examples. After a while, I got frustrated, and gave up on this for this project, but I’m hoping to keep working on it, and figuring out how to get it to work for possible future projects. It seems like a neat tool.
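For reference, a rough sketch of the kind of minimal instafeed.js setup I was attempting (the tag name and client ID are placeholders):

// instafeed.js renders thumbnails into an element with id="instafeed"
var feed = new Instafeed({
  get: 'tagged',
  tagName: 'berlin',
  clientId: 'YOUR_CLIENT_ID'
});
feed.run();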

So I reverted back to the widgets/hashtag method, with a site called uptsi. This was easy to use, and easy to implement across the pages, so for the moment this is the method applied on the site. One thing I haven’t decided yet is how much control I want over the images displayed on the pages. I don’t know if I want to take out photos I don’t like, or to always have it be the first 6 photos in the tag/location…

Some screenshots for you:
[Screenshots: site pages with the Instagram feed]

[Week 14 & 15] Final Project: AWWOLS

The Project

A Week’s Worth of Life Stories



The Reflection

This project took me an incredibly long time to complete. I’m fairly satisfied, though; I think it turned out well. My original idea was to create some kind of clock showing turning points in people’s lives, because I’m really interested in personal stories; however, I felt that the clock idea was too complicated, though I wanted to stick with the general concept. When we were given the assignment to try data mining social media websites, I really wanted to be able to scrape Facebook to see if I could get my friends’ ‘life events’. While Facebook does display these when you type ‘life events’ into the search bar, and I’m sure there must have been a way to scrape them, the problem was that the Facebook SDK wasn’t working (I tried valiantly to figure the issue out, but to no avail), so I had to give that up.

My next goal was to use Twitter, but now the issue was, what was I going to do? I tested scraping many different things, from the hashtag #finalsweek (seeing as that’s on everybody’s minds) to babies (seeing as a lot of people our age seem to have babies on their minds… E.g. my roommate dreamed about having triplets, Megan Graham actually had her twins, Sarabi loves babies, and so on). It took what seemed like forever, but then I remembered what I was going to try scraping from Facebook: life events. So I tried to scrape that term, then changed it to life stories, and then finally settled on the hashtag #storyofmylife. For visual representation, I decided I would use Matt’s suggestion of Knight Lab’s TimelineJS to create a timeline with the tweets, as well as CartoDB to create a map of them.

I again modified my code so that it would scrape that hashtag, as shown below. And soon, once I finally set about sorting through all of the tweets I’d scraped, I realized I had too. much. data. I couldn’t go through that many tweets; there were far too many. My code was set to scrape 1,500 tweets because I wanted a timeline that spanned a week, forgetting that this was obviously too many to go through. The thing is… I did go through them all. So I went a little overboard. But I worked with it! Below is a description of how my process went.

Step 1: Scraping Data From Twitter
Because I was making so many changes and modifying my code depending on what kind of scrape I thought I’d test out, I exceeded my rate limit several times and got the error 429 from the Twitter API because I was making too many requests too fast. Life is hard. It took me a long time to get my code together despite it looking so straightforward, because I tried many different things before settling on the final topic of my scrapes/stories/project, so all of those iterations added up quickly and I guess Twitter didn’t appreciate me spamming requests at such a rate. Whatever, man.

Here’s my code:
[Screenshot: the Tweepy scraping code]
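Since the screenshot doesn’t reproduce here, a hedged sketch of what a Tweepy scrape along these lines might look like (not my exact script; the credentials are left blank as usual):

import tweepy

consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# pull English tweets tagged #storyofmylife, with the fields listed below
for tweet in tweepy.Cursor(api.search, q='#storyofmylife', lang='en').items(1500):
    print('Date:', tweet.created_at)
    print('Location:', tweet.author.location)
    print('Geo:', tweet.geo)
    print('Time zone:', tweet.author.time_zone)
    print('Name:', tweet.author.name)
    print('Screen name:', tweet.author.screen_name)
    print('Tweet:', tweet.text)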
Reasoning for the things I scraped:

  • [Date] Needed the dates and times for putting tweets on the timeline. No problems acquiring these.
  • [Location] Needed for putting tweets on the map. But not everybody had a location, so I couldn’t use all of the tweets I scraped.
  • [Location, geo-tagging] Didn’t get much out of this. Not a lot of people have this option turned on.
  • [Time zones] Helpful, because if I only received a time zone like ‘Eastern Time, US & Canada’, I couldn’t put that on the map unless the time zone was more specific (I got a tweet that said the person’s location was in the ‘UK, for now’, but their timezone stated ‘London’ which was more useful).
  • [Names and screen names] To give some kind of credit to the people whose tweets I used.

And here’s what my sorting process kind of looked like (showing all tweets, map selections, and timeline selections, from top to bottom):

[Screenshots: all tweets, map selections, and timeline selections]

Step 2: Sorting through the tweets I acquired

I had to sort through all of the data I collected and select the tweets I was going to use. For example, I got tweets with the locations ‘hillbilly hell’ and ‘coleworld’, which, amusing as they were, weren’t feasible to put on the map, nor were the tweets that had no location specified (which was sad when they were perfectly good tweets). I had to go through all the tweets to make sure things came out correctly because, for example, ampersands tend to show up as the HTML entity &amp; instead of &. And although I set the language to English, I got a location written in Urdu for someone in Quetta, Pakistan, and somebody’s name in Korean. Sometimes I had to choose between a location and a time zone if I didn’t get the other one to confirm it, or if I got both and they disagreed. I ended up filtering out a lot of tweets because of all this. I also removed tweets that were about the song Story of My Life by One Direction; not what I was looking for.

While I did this, I realized I had scraped way too much. It wasn’t really possible for me to go through all of them, so, because I’d already started sorting through tweets, I thought I’d use a number of those and go with that. This was my first, silly, idea. But then it occurred to me that I could just go through the days and select, say, 5 tweets per day, making a total of 35 each for the map and the timeline. I didn’t use the same tweets for both; for the timeline, I chose tweets that didn’t provide a location. So I decided to choose tweets that I found interesting, trying to get a variety of places for the map and a variety of times throughout the day for the timeline. You would think this was easy. But, no. That sentence about it not being possible for me to go through 1,500 tweets? I take it back. I did that. For hours and hours on end. Just to make sure I was choosing a good 5 out of the 200+ per day, on average, across those 7 days. Random fact: going through 1,500 tweets for hours and hours on end makes you dizzy.

Step 3: Creating a map using Carto

After I’d finished selecting tweets (finally. I should have cried with happiness), I went about adding the 35 chosen for the map to a CartoDB map. I have some prior experience with it, but I watched the video tutorial anyway, and the process was fairly straightforward. CartoDB supports Twitter, but I wasn’t sure how to go about using that function because I think I would have needed to scrape data through Carto for this, and I was just not going to do that. So, I decided to add the points manually; no real problems here, it was just a bit time consuming because I was extremely tired (having been working all day). I just really wanted to be able to finish this project because I have other finals to focus on, too.

For some reason my map isn’t embedding, so here’s a link to see it by itself.
[Screenshot: the Carto map]

Step 4: Creating a timeline using TimelineJS

I watched the video tutorial for this as well, because I’d never used TimelineJS before. It wasn’t so bad; again, just a bit time consuming. Because TimelineJS also allows for Twitter content to be retrieved and I had the names and screen names of the users, I found their tweets on Twitter and decided to link them to the timeline and have them displayed that way. I customized the timeline some to make it less boring, and that’s about it, I guess.

And here’s a link to see my timeline, because that won’t embed here, as explained in step 5.
[Screenshot: the timeline]

Step 5: Putting it together on a website

I went through CSS Zen Garden as Marianne suggested, but I didn’t really find a style that attracted me. I then decided instead to use WordPress; so I created a website and then customized/edited it a fair bit, because there were a lot of extraneous things that I didn’t need. Essentially, I wanted a simple website to bring together the visualizations I’d created. This took a fair bit of time as well. I didn’t have a problem embedding my map into my website, but things were not so smooth for the timeline. According to TimelineJS’s website, it doesn’t work with WordPress; I looked things up, and there is a plugin, but that doesn’t work for WordPress.com websites. So I had to just leave a link on my website instead.

Overall, my project isn’t perfect, but I think it turned out well enough. It was a long journey, I guess, but also kind of fun. I think there’s potential to make a larger scale version of my project, which would be cool. But for now, this is all I can do.

Week 13: Social Media Assignment

Well, this really, really sucks. My laptop battery fried, and now I do not have a working laptop. This happened in the midst of my assignment, which was really aggravating to deal with, and I had to switch over to using ZZ’s laptop. I am incredibly unfamiliar with Macs, so I did not quite understand how to use Terminal and Python. Unfortunately, that meant I was not able to explore this assignment and understand more; I only did the bare minimum because I was genuinely so confused by the operating system. It also didn’t help that I was extremely frustrated and saddened about my laptop. So I do not feel like I understood this assignment to the fullest extent. Everything else from here on out will have to be done on Macs, so I will have to figure that out soon.

My first one was just getting tweets from my friend’s Twitter:

import tweepy

consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

# authenticate with the Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)


# print the text of my friend's most recent tweets
username = 'emma_is_lit'

tweets = api.user_timeline(username)
for tweet in tweets:
    print(tweet.text)

My second one was to scrape tweets that had to do with the LA Clippers:

import tweepy

consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

# authenticate with the Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)


# print details for the 10 most recent tweets matching the search term
search_term = 'clippers'

for tweet in tweepy.Cursor(api.search, q=search_term).items(10):
    print('Name:', tweet.author.name)
    print('Screen name:', tweet.author.screen_name)
    print('Date tweeted:', tweet.created_at)
    print('Tweet content:', tweet.text)
    print('Hashtags:', tweet.entities.get('hashtags'))

ZZ’s Tweepy & Beygency | Week 12

Example 4, “Get Trends for a Location”, didn’t work for me. I was trying to get the trends in Prague by changing the woeid to 796597 and keeping the rest of the code exactly the same, and this is what I got. 🙁

[Screenshot: the error output]
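For reference, a hedged sketch of what that example call looks like in Tweepy (roughly the shape of the example, not a fix for whatever went wrong on my end):

# fetch trending topics for Prague by its woeid
trends = api.trends_place(796597)
for trend in trends[0]['trends']:
    print(trend['name'])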

I started with tweeting with Tweepy. Pretty straightforward.

[Screenshot: tweeting with Tweepy]

Then I started by searching for “Lemonade”, a.k.a. the name of Beyoncé’s brand-new album, and collecting the author name of the latest tweet about ‘Lemonade’, as well as that author’s most recent 20 tweets and their numbers of followers and followings. It went pretty smoothly.

[Screenshots: the search results and author details]
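Since the screenshots show the output rather than the script, here is a rough sketch of how that search might be written with Tweepy (not the exact code; it assumes api was already set up with OAuth as in the scripts above):

# grab the most recent tweet mentioning 'Lemonade'
latest = api.search(q='Lemonade', count=1)[0]
author = latest.author

print('Author:', author.name, '@' + author.screen_name)
print('Followers:', author.followers_count)
print('Following:', author.friends_count)

# the author's 20 most recent tweets
for tweet in api.user_timeline(screen_name=author.screen_name, count=20):
    print(tweet.text)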

Then I tried example 4, as mentioned above, but it didn’t work. So I decided to find out who should be targeted by the Beygency (if you are not sure what the Beygency does, check this out: https://www.youtube.com/watch?v=rGxe83lXgJg). Their names and locations will be provided to the Beygency.

[Screenshots: the Beygency targets]