Audio News Assignment

Remember, we’ll attend a CCI Diversity and Inclusion Week event Thursday! Because we will not be in class Thursday, this assignment will be due on Tuesday, October 4.


The assignment is to create a short radio news story (~1:00) about Illinois naming September 26 Gold Star Families Day.

We are using an example from Illinois because the state provides much easier access to raw audio from the Governor’s events than Tennessee does.

You will create a radio news story including:

  • Narration (which you will have to write and record)
  • Audio from the event and from a press gaggle with the Governor after the event (both provided)

Once you have created your audio story, you will create a corresponding print story in which you will embed it. Use SoundCloud to embed your audio.

You can find all the raw info you need to complete the assignment at this link (on Mac control click and choose “save as…”). It includes:

  • Audio from the event (17 minutes)
  • Press gaggle with Governor after event (6 minutes)
  • 2 photos that are cleared to use in the print story
  • Press release from the Governor’s office about Gold Star Families Day

If you need more background on Gold Star Families, you can learn more about them at this website. If you don’t know how to use Audacity for audio editing, you can learn here.

Writing to the web using WordPress

 Here is a quick, meta guide to using WordPress for publishing stories. 

WordPress is one of the most widely used content management systems out there. It is perfect for creating a simple blog or portfolio site. It can also be used for much bigger projects, but that is another story for another day.

Step 0: The first thing you have to figure out is where you want your WordPress site. Here are some options:

  1. On WordPress.com – WordPress offers free hosting on their servers. There are numerous limitations, including little customization, a weird address (i.e., yourname.wordpress.com) and limited space for photo hosting.
  2. On your own site (auto install) – Allows you to use a customized domain name (i.e., nickgeidner.com), but is still limited in customization.
  3. On your own site (manual install) – Allows you to do anything you want.

Let’s assume that you are going to use WordPress.com to host your site.

Step 1: Go to WordPress.com and choose “Create Website.” WordPress will then ask you a handful of questions about your blog. The second question will be about the layout of the site. Basically, it is asking if you would like a blog, a static website, or a portfolio. Choose wisely. Then you’ll choose a theme, a name and a couple other things.


Step 2: Finished answering the questions? Then your site should be ready to go. WordPress will take you to your site’s landing page, or, as they call it, the dashboard. To start a new post, click the “Add” button next to the “blog post” tab.


Step 3: You can start your story with a clear 5–10 word headline placed in the “Title” area of the page. Next, your 20-word summary can be placed at the top of the box for the body copy. 

Step 4: Following the lead of BuzzFeed or Vox, let’s add a photo right under the summary. Click the “Add Media” button on the left side of the toolbar. Choose the photo you want to add. Once it is in the story, click on the picture. You will notice a number of display options. Play around with them.

Step 5: Once you add your photo, you can start typing the body copy for your story. Remember short — one to two sentence — paragraphs.

You might also want to format some of your text, such as adding links. Select the text to which you want to add the link or other formatting. Then you can use the toolbar at the top of the page to add links, block quotes, etc.

Step 6: Now all you need to do is publish the story. Click the publish button in the top right and you’re done.


FOR NEXT CLASS: Post the story you wrote for today’s class on Medium.

To complete the assignment, you’ll need a photo of an EpiPen, right? So you just go to Google Images and find something, right? NO! NO! NO!

You need to find a legal photo to use. We’ll talk more about image rights later in the semester, but for now we’ll use some filtering options in Google to find copyright-free photos.

Step 1: Go to Google Images and search for EpiPen.

Step 2: Click the “Search tools.”

Step 3: Then click the “Usage rights” button and select “Labeled for noncommercial reuse.” Once you click it, Google will update the image search with only images you can use for class assignments.

 

Scraping with OutWit Hub

I wrote the below post for my introductory data journalism class at the University of Tennessee. It provides an example of how to use OutWit Hub to scrape information from numerous pages structured in the same manner.

————

The other day during class I scraped data from the Congressional Medal of Honor Society’s website using OutWit Hub. Ahead of today’s assignment, I figured I would pull together a guide on how I did it.

Step 1. We need to decide what we want and where it is.

I want some descriptive data about living Medal of Honor recipients in order to provide some context to the reporting we are doing at the Medal of Honor Project. Specifically, I want each recipient’s name, rank, date of birth, date of medal-winning action, place of action, MoH issue date and place of birth.

If I choose “Living Recipients” from the “Recipients” tab, I see this:

[Screenshot: the Living Recipients list, showing each recipient’s name, rank, organization and conflict]

If this screen had all the information I needed, I could easily use the Chrome Scraper extension to grab the data. Unfortunately, I want more information than their name, rank, organization, and conflict. If I click on one of the entries, I can see that all the information that I want is on each entry page.

So now we know that we want to grab a handful of data from each of the pages of the living recipients.

Step 2. Collect the addresses of all the pages we want our scraper to go to (i.e., the pages of all the living recipients).

We can do this in a number of ways. Since there are only 75 living recipients, across three pages, we could easily use the Chrome Scraper Extension to grab the addresses (see this guide if you forget how to use it).


Since I am using this project as practice for grabbing data from the pages of all 3,463 recipients, I decided to write a scraper in Outwit to grab the addresses.

To write a scraper, I need to tell the program exactly what information I want to grab. I start this process by looking at the coding around the items I want using the “Inspect Element” function in Google Chrome.


If I right-mouse click on the “view” link and click “Inspect Element,” I will see that this is the line of code that relates to the link:

<div class="floatElement recipientView"><a href="http://www.cmohs.org/recipient-detail/3219/baca-john-p.php">view</a></div>

This line of code is all stuff we have seen before. This is just a <div> tag with an <a> tag inside it. The <div> is used to apply a class (i.e., floatElement recipientView) and the <a> inserts the link. The class is unique to the links we want to grab, so we can use that in our scraping. We just need to tell OutWit Hub to grab the link found within any <div> tag of the recipientView class.
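A quick aside for the code-curious: the same grab can be sketched in a few lines of Python, assuming the requests and BeautifulSoup libraries are installed. This is just an illustration of the logic, not part of the OutWit workflow.

import requests
from bs4 import BeautifulSoup

# Load the first page of living recipients
page = requests.get("http://www.cmohs.org/living-recipients.php?p=1")
soup = BeautifulSoup(page.text, "html.parser")

# Grab the link inside every <div> with the recipientView class
for link in soup.select("div.recipientView a"):
    print(link["href"])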

In Outwit, we start by loading the page we want to scrape.


Then we want to start building our scraper by choosing “Scrapers.” When we click into the scraper window, we will have to pick a name for our scraper. I chose “MoH Links.” You will also see that the CMOHS website has flipped into a code view. We will enter the directions for our scraper in the lower half of the screen, where it says description, marker before and marker after.


We just need one bit of info, so our scraper is simple. I entered:

  • Description = Link
  • Marker before = recipientView"><a href="
  • Marker after = ">

You can then hit “Execute” and your scraper should grab the 25 addresses from the first page of living recipients. But remember, I don’t want the addresses from just the first page, but from all three pages.

To do this, I need to step back, get super meta, and create a list to make a list. If you go to the second page, it is easy to see how these pages are organized or named. Here is the address for the second page of recipients:

http://www.cmohs.org/living-recipients.php?p=2

Not shockingly, “p=2” in English means “page equals two.” A list of all the addresses is simple to derive:

http://www.cmohs.org/living-recipients.php?p=1

http://www.cmohs.org/living-recipients.php?p=2

http://www.cmohs.org/living-recipients.php?p=3
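With only three pages you can type these out by hand, but if you ever face hundreds of pages, a rough Python sketch like this will write the same list to a text file for you (the file name is made up):

# Write one listing-page address per line to a text file
with open("mohlinks.txt", "w") as f:
    for p in range(1, 4):
        f.write("http://www.cmohs.org/living-recipients.php?p=%d\n" % p)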

If you create this list as a simple text file (.txt), we can bring it into OutWit Hub and run our scraper on all of these pages. After I create the text file, I go to OutWit, choose “File > Open” and select the text file. Next, select “Links” from the menu on the right-hand side of the screen. It should look like this:

[Screenshot: the links text file opened in OutWit Hub, with “Links” selected]

Now, select all the links by using Command+A. Then right-mouse click and choose “Auto-Explore Pages > Fast Scrape > MoH Links (or whatever you named your scraper).” OutWit should pop out a table that looks something like this:

[Screenshot: the table of scraped recipient-page links in OutWit Hub]

YOU JUST RAN YOUR FIRST SCRAPER!!!

Way to go!

Now just export these links. You can either right-mouse click and select “Export selection” or click “Catch” and then hit “Export.” I usually export as an Excel file. We’ll eventually have to turn this file into a text file, so we can bring it back into OutWit. For now, just export it and put it to the side.

Step 3. Create a scraper for the data we actually want.

We are going to start with “Inspect Element” again. Remember, we want to find unique identifiers related to each bit of information we want to grab. I went in and looked at each piece of information (e.g., Issue Date) and examined the coding around it.


If you run through each of the bits of information we are grabbing, you start seeing a pattern in the way the information is coded and unique identifiers for each piece of information. For example, the code around the “Date of Issue” looks like this:

<div><span>Date of Issue:</span> 05/14/1970</div>

And it looks like that on every page I need to scrape. So I can enter the following information into a new OutWit scraper – I called this one MoH Data – in order to grab the date:

  • Description = IssueDate
  • Marker before = Issue:</span>
  • Marker after = </

OutWit will grab the date (i.e., 05/14/1970), which is all the information between the marker before (“Issue:</span>”) and the marker after (the “</” that closes the <div>).
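To make the marker logic concrete, here is a small Python sketch of what OutWit is doing with those two markers; the HTML line is the Date of Issue example from above.

# The line of code we are scraping from
html = '<div><span>Date of Issue:</span> 05/14/1970</div>'

# Keep everything after the "marker before" ...
after = html.split("Issue:</span>", 1)[1]
# ... and everything before the "marker after"
issue_date = after.split("</", 1)[0].strip()

print(issue_date)  # prints 05/14/1970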

Just about every piece of information we want has a label associated with it, which makes it very easy to scrape. I just went through and created a line in OutWit for each piece of data I wanted, using the label as the marker before.

The only piece of information that doesn’t have a label is the name. If you right-mouse click on it and choose “Inspect Element,” you will see that it is surrounded by an <H4> tag. If you use the Find function (command+F), you’ll see that the name is the only item that has an <H4> tag associated with it. So we can tell OutWit to grab all the information inside the <H4> tag, like so:

  • Description = Name
  • Marker before = <H4>
  • Marker after = </


Once I got my scraper done, I hit the “Execute” button to see if it worked. It did!

Step 4. Now I just need to tell OutWit where to use my new scraper.

Go back to the Excel file you created in Step 2. Copy the column of links and paste them into a new text file. Save this new text file. I called mine mohlinks2.txt.


Next we open up OutWit. Before we actually start scraping, we need to deal with a limitation of the free version of OutWit: you can only have one scraper assigned to a given web address. So we need to change “MoH Links” (our first scraper) so it is not associated with cmohs.org.

Open up “MoH Links” on the “Scrapers” page of OutWit. Below where it says “Apply If Page URL Contains” there is a box that contains “http://www.cmohs.org.” Delete the address from that box and save the new “MoH Links” scraper. Now go into the “MoH Data” scraper and enter the cmohs address in the same box, save the scraper, and then close and reopen OutWit.


Next, open mohlinks2.txt. Select all the links (command+A) and choose “Auto-Explore Pages > Fast Scrape > MoH Data (or whatever you named your scraper).” Slowly but surely, OutWit Hub should go to each of the 75 pages in our links text file and grab the bits of information we told it to grab. Mine worked perfectly.

All you need to do now is export the data OutWit collected; then you can go into Excel to start cleaning it and pulling information from it.
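For anyone who wants to see the whole process in code rather than in OutWit, here is a rough Python sketch that reads the links file, visits each recipient page and pulls two of the fields using the same marker logic. It assumes the requests library is installed, the file names are the ones used above, and the error handling is kept to a bare minimum.

import csv
import requests

def between(text, before, after):
    # The "marker before / marker after" idea from OutWit
    return text.split(before, 1)[1].split(after, 1)[0].strip()

# The list of recipient pages we exported in Step 2
with open("mohlinks2.txt") as f:
    links = [line.strip() for line in f if line.strip()]

with open("moh_data.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Name", "IssueDate"])
    for url in links:
        html = requests.get(url).text
        # The name tag may appear as <H4> or <h4> on the live pages; adjust if needed
        writer.writerow([between(html, "<h4>", "</"),
                         between(html, "Issue:</span>", "</")])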

Although this first one probably seemed a bit rough, you will get used to how information is structured in websites and how OutWit works over time.

Video Production – In-class Assignment

Today for the first half of class, we will run through basic operation of a DSLR for video production, and then each of you (in groups) will shoot a short video. The video will be a companion piece to a story being produced by your organization about Land Grant Films, a documentary production house being created at the University of Tennessee.

The author of the print piece has provided you with the press release she received from the university.

Oct. 28, 2015

UT Professor Launches Philanthropic Documentary Brand

KNOXVILLE — The Medal of Honor Project, a collaboration between the University of Tennessee, Knoxville, School of Journalism and Electronic Media and the 2014 Medal of Honor Convention, sparked an interest in directing for a UT journalism professor.

Nick Geidner, assistant professor of journalism, is launching and directing a new project he’s calling Land Grant Films, a logical extension of the Medal of Honor project, also directed by Geidner. The project is based in the School of Journalism and Electronic Media.

“My goal with Land Grant Films is to provide students with real world documentary
experience while getting them engaged in organizations and issues that affect the community,” said Geidner.

The project will provide students with real-world experience in documentary storytelling and also give local non-profits video assets that can be used to raise awareness and funds for their cause.

Land Grant Films already has several projects in the works. They are working on films for several local organizations, including the Boys and Girls Club of the Tennessee Valley, Metropolitan Drug Commission, Tennessee Paracycle Open and Joy of Music School.

“Students involved in our films get to work on all aspects of the production, from running camera and conducting interviews to scripting and editing the film,” said Geidner. “It is an intense, hands-on experience that gets the students ready for a job in the video production field.”

For more information on Land Grant Films, visit www.landgrantfilms.org.

# # #

A producer has reached out to Geidner and he has agreed to an interview at his office. He is very, very important, so you will only have 15 minutes to interview him and shoot all the necessary b-roll.

I will post each group’s raw video to Vimeo. From the raw video you will write a script.

Video storytelling

Boyd Huppert is unquestionably one of the best video storytellers in the business. As a matter of fact, just last week Huppert won two Murrow Awards (writing and feature story). Here is one of his most recent stories, part of his Land of 10,000 Stories series:

Then here is his Murrow Award-winning feature story (also part of the Land of 10,000 Stories series):


But broadcast journalism is not only good at feature stories. It can also be used for important investigative journalism. “Injured Heroes, Broken Promises” by KXAS and the Dallas Morning News investigates failures at the Warrior Transition Unit at Fort Hood. It won the 2014 SPJ Sigma Delta Chi Award and the 2015 Murrow Award for investigative journalism.

BuzzFeed founder’s email about NBCUniversal investment

Here’s the full email Jonah Peretti, BuzzFeed’s founder and CEO, sent to BuzzFeed staff about the NBCUniversal investment.

Hello BuzzFeeders, 
I’m very excited to share that NBCUniversal has agreed to invest $200 million in BuzzFeed and partner with us to extend our reach to TV and Film. NBCU is the home of the Today Show, Jurassic World, the Minions, the Olympics, Jimmy Fallon and much more and we are looking forward to collaborating with them on projects we’d never be able to do on our own.

  
We’ve also signed an agreement with Yahoo! JAPAN to launch BuzzFeed Japan as a joint venture based in Tokyo. Yahoo! JAPAN is the leading digital media company in a huge market, reaching almost all of the online population in Japan. Partnering with them allows us to grow much more quickly in Japan than if we launched on our own. You can read more about our strategy from Greg’s blog post.
Additionally, we’ve executed a series of partnerships with the leading digital platforms, including Facebook’s Instant Articles, Snapchat’s Discover, Apple’s forthcoming News app, with more to come. These partnerships allow us to reach a bigger audience and have a bigger impact than what would be possible on our own. 
All these deals were structured to assure BuzzFeed’s continued editorial and creative independence. Equally important, the investment from NBCU and our rapidly growing revenue assures our financial independence, allowing us to grow and invest without pressure to chase short term revenue or rush an IPO. Our independence and a long term focus align us with our readers and viewers and help us deliver the best possible service for our audience. 
This is also great news for all BuzzFeed employees. There will be many opportunities for career development and growth as we expand in new areas and take on new challenges. Your work will have a bigger impact than ever before, spreading to more countries, across more platforms, in more formats. As a result of these deals, the work you are doing will play a bigger role in the lives of an even larger, more diverse, global audience. 
I’m sure you have lots of questions and I encourage you to submit them anonymously here. Today at 12p EST we’ll have a Global All Hands where I’ll answer your submitted questions and will be able to take live questions in NYC. Tonight at 7:30pm EST, I will do an all hands for the Sydney office. You should have a calendar invite with all the necessary information. My team and I will also answer questions in the new Slack channel #AMA at 3pm EST if you have more questions. If you don’t have Slack, click HERE to sign up and email helpdesk with any problems. 
One final point that is very important. None of this would be possible without the amazing work that all of you have done building BuzzFeed. Your inspired work in news and entertainment, tech and product, business and sales, across the U.S. and in countries around the world, has resulted in something truly remarkable and unexpected. So thank you and I can’t wait to be surprised and amazed by what you create next. 
Now let’s go have some fun! 
Jonah

Via @chrisgeidner

Deadline online graphic

As a class (or group, depending on how many people show up), you must create an interactive graphic to go along with a story about income disparities in America.

The print article is not finished yet, but the author sent you over the following info:

  • the article draws heavily from this report by the U.S. Census Bureau
  • the author talks generally about disparities, but also focuses on gender and race disparities and how these have changed over time

Your editor wants to post this as a quick web story in the next couple of hours. The graphics editor has suggested you use Juxtapose.JS as a novel way to demonstrate income disparities.

You have the rest of class to create something and post it to GitHub.

HINTS:

  1. Juxtapose needs two similarly sized images.
  2. You can host an image on Google Drive by (1) uploading it, (2) setting the privacy option to “anyone on the web,” and (3) using an address in the form https://drive.google.com/uc?id=0B2McuVJ6osMBaWc2dTEyWFBYU1U. Just replace the crazy string after “id=” with the file ID for your image (see the tiny sketch below).
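Here is the same idea as a two-line Python sketch, just to make the pattern obvious; the file ID shown is the one from the example address above.

# Build the direct-link address for an image hosted on Google Drive
file_id = "0B2McuVJ6osMBaWc2dTEyWFBYU1U"  # swap in the ID of your own uploaded image
image_url = "https://drive.google.com/uc?id=" + file_id
print(image_url)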

Homework for Tuesday, April 14

For Tuesday you will recreate this webpage. You will use the following:

  1. The U.S. Department of Education’s Equity in Athletics dataset
  2. Excel – to clean and prep the data
  3. Refine or the Tableau Excel Plugin (PC only) – to format the data
  4. Tableau Public – to build the graphics
  5. Text editor – to code the webpage
  6. GitHub – to host the webpage

I have provided hints for each step in the process below. Try to do it yourself, but if you really get stuck use the hints.


Step 1 Hint

Think of what you need to complete the graphic.

  • Data from 2003 to 2013 on every sport (male and female) for a single school (i.e., UTK)

Now that you know what you need, you should be able to find it pretty easily using the “Download selected data” tool on the Equity in Athletics page.

Step 2 Hint

Again ask yourself, “What do I need?”

We just need the data for each year for each sport, so we can dump a lot of what is in the default dataset, like all the sports UT doesn’t have, the totals for each sport and all of Row 1.

Once we get rid of all the extraneous data, we just need to do one extra thing: transpose the rows and columns. We need to do this as step one of the process of formatting the data for Tableau.

Here are instructions on transposing data in Excel.

Extra hints: (1) We are still missing one piece of data. We need a variable for gender, so we can color code the boxes. It can just be a simple binary variable (i.e., 1 and 0). (2) We should also shorten the names of each sport in Excel. It is much easier to do it here than in Tableau or Refine. So instead of “Baseball Men’s Team Expenses” cut it down to “Baseball.”
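If you would rather do this clean-up step in code than in Excel, here is a rough pandas sketch of the same idea. The file names are made up, and it assumes the sport names end up as the row labels after the transpose.

import pandas as pd

# Load the trimmed-down Equity in Athletics download (file name is made up)
df = pd.read_csv("equity_trimmed.csv", index_col=0)

# Swap the rows and columns, the same as Paste Special > Transpose in Excel
df = df.transpose()

# Add the simple binary gender variable described above
# (assumes the sport names are now the row labels)
df["Gender"] = [0 if "Women" in sport else 1 for sport in df.index]

df.to_csv("equity_transposed.csv")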

Step 3 and Step 4 Hint

This video will walk you through the process of designing the graphic in Tableau Public. It will also explain how to format the data using the Tableau Excel Plugin, which is PC only. If you are using a Mac, you will have to use Google Refine to reshape the data. Here is where you can download Refine, and here is how to use it to reshape the data into the format we need.
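If you are on a Mac and want another route besides Refine, the same reshape can be sketched with pandas. The column names here are assumptions about how your spreadsheet is laid out, not the real headers.

import pandas as pd

# A wide table with one row per sport: a Sport column, a Gender column
# and one column per year (names are assumptions)
df = pd.read_csv("equity_wide.csv")

# Reshape to one row per sport per year, the long format Tableau expects
long_df = df.melt(id_vars=["Sport", "Gender"], var_name="Year", value_name="Expenses")

long_df.to_csv("equity_long.csv", index=False)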

Step 5 Hint

We need to build a simple website with three elements.

  1. A headline
  2. Body text
  3. Graphic

The graphic is easy. We use the embed code from Tableau. The headline and body text are also pretty simple. Remember, we can add style to any tag. So for the headline, we can just wrap it in a <div> tag with a style attribute, like so:

<div style="font-family:Georgia;font-size:300%">The Ever Growing UT Athletics Budget</div>

Then we can do the same thing with the body copy, but changing the style applied.

Step 6 Hint

Remember, to create a project page on GitHub, all we have to do is create a “gh-pages” branch and then add an “index.html” to that branch.

Here are instructions from GitHub.

 

Homework

For Thursday, I want you to adapt the code we looked at in class today. Use this code as a basis. Here are the things I want you to do.

1) Add three more data points. Make up the data. Play with numbers much higher than our current scale.

2) Change the color of the axes. Make them red.

3) Scale the radius of the circles based on the number of Pulitzer winners and finalists.