Tuesday, November 18, 2008

Week 13 Required Readings

http://www.noplacetohide.net/
This site is from the Center for Investigative Reporting, which produced a radio and television documentary on government intelligence gathering on citizens. It includes a variety of interviews from the documentary, and No Place to Hide is also a book by Robert J. O'Harrow, Jr. The existence of networks that track people and their purchases doesn't surprise me: when a credit report is pulled to buy a car or rent an apartment, the lender or landlord can see every credit card, student loan, and late payment. I wanted to listen to the documentary, but it was only in RealAudio format and I don't have the RealAudio player installed on my computer. If you have a public blog, MySpace, Facebook, or any other social networking profile, it's no longer astonishing to find out that employers will look at it when hiring new employees, or that it may be used for data mining.
The website did not note the last time it was updated.

http://epic.org/privacy/profiling/tia/#introduction
Total (Terrorism) Information Awareness (TIA). Last updated in 2005.
This page is about a system that would have collected information on citizens without any prior suspicion of wrongdoing, and it would not have given people the right to be left alone. It would have housed everything from people's medical records to their gait, and would have been able to recognize a person's face from a distance. The government cut the funding for this data mining program in 2003, but the article points out that this doesn't mean work on this kind of database has stopped.

http://www.youtube.com/watch?v=hS8ywG5M_NQ - This video is no longer available due to a copyright violation.

Discussion Topic Readings:
http://www.youtube.com/watch?v=hS8ywG5M_NQ - Is Privacy Dead?

It is possible to protect privacy and security at the same time. One example is the "naked" x-ray machine: the scan can project any contraband onto a sexless mannequin instead of showing the person's body. I thought the speaker's point that people want to control their exposure, not their privacy as such, was interesting. The use of surveillance cameras was recently a hot topic in Pittsburgh as well. http://www.post-gazette.com/pg/07178/797429-53.stm
Here is a link from a local blog on the project: http://pittsblog.blogspot.com/2007/06/pittsburgh-panopticon.html


Thursday, November 13, 2008

Final Post

I believe I have my 10 required readings, 10 muddiest points, and my 10 comments on other classmates' blogs. I have enjoyed keeping this blog. However, I noticed that while doing the readings and writing the posts . . . I would wander off to other websites. Readings that should have been quick suddenly took much longer to finish. I recently listened to an NPR broadcast that kind of explains why it might take me longer to get readings done while on the internet. Here is a link: http://www.npr.org/templates/story/story.php?storyId=95524385

Week 11 Comments

https://www.blogger.com/comment.g?blogID=2401688410692832555&postID=5202000476982369583&page=1

https://www.blogger.com/comment.g?blogID=4736393327020365268&postID=6719120915827707176&page=1

Muddiest Point Week 10

I have a question about controlled vocabulary; I realize this might be a little off topic. When a term in a controlled vocabulary is changed, do the indexers use a find and replace to update the term in the indexed documents? If a new term is added, is it possible to go back through a database and add the new term to other documents it might describe without rereading every document?
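While I wait for an answer, here is how I imagine the rename could work if the index is stored separately from the documents themselves (purely my own guess at how an indexer might handle it, not something from class):

```python
# My mental model: the index maps each vocabulary term to the set of
# documents it describes. Renaming a term then only touches the index,
# not the documents themselves.
index = {
    "cookery": {"doc1", "doc4"},
    "nutrition": {"doc2"},
}

def rename_term(index, old, new):
    # Move the old term's document list under the new term,
    # merging if the new term already exists.
    docs = index.pop(old, set())
    index.setdefault(new, set()).update(docs)

rename_term(index, "cookery", "cooking")
print(index)   # {'nutrition': {'doc2'}, 'cooking': {'doc1', 'doc4'}}
```

Adding a brand-new term to older documents seems like the harder case, since a person or an algorithm still has to decide which documents the new term applies to.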

Not a muddy point, but a comment on Yahoo's old style of categories that was mentioned in class. . . I can remember that in the late '90s I submitted a website to Yahoo's search engine, and if I remember correctly, I had to select the subject categories it fell under.

Wednesday, November 5, 2008

Muddiest Point Week 9

When an XML document is not well formed, what happens? Do you get an error message that lets you know what element might have been left out?
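Out of curiosity I tried it with Python's built-in XML parser (just my own experiment, not something covered in class), and it does point to roughly where the document breaks:

```python
# Parse a deliberately broken XML snippet and see what the parser reports.
# The closing tag is misspelled on purpose, so the document is not well formed.
import xml.etree.ElementTree as ET

bad_xml = "<note><title>Hello</titel></note>"  # mismatched end tag

try:
    ET.fromstring(bad_xml)
except ET.ParseError as err:
    # Prints something like: Not well formed: mismatched tag: line 1, column ...
    print("Not well formed:", err)
```

So at least this parser reports the line and column where things went wrong, though it doesn't spell out exactly which element was left out.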

Saturday, November 1, 2008

Required Readings Week 11

Here is a link to the PA Digital Library.
http://padl.pitt.edu/index.php/index

Dewey Meets Turing

This article discusses the advantages that the computer science (CS) field and the library science (LS) world gained as a direct result of the National Science Foundation's launch of the Digital Libraries Initiative (DLI). The DLI changed the way we use digital resources.

CS researchers were able to impact the daily lives of library users, for example by moving the card catalog from the shelf to the web. This has led to instant access to, and locatability of, resources from around the world. It has also sped up publishing: instead of a year-long lag between a scholarly article being accepted and appearing in print, an article can now be published almost instantly.

Libraries thought the DLI would help them gain funding for these projects. However, they ended up feeling that the computer science field took most of the grants.

There were also tensions between the CS and LS fields. CS researchers did not understand the importance of metadata fields; they thought a simple search algorithm would take care of the problem.

Digital Libraries
This article describes the growth of digital libraries and the sources of their funding. A major accomplishment of the DLI was the creation of standards for digital libraries. The program also had a hand in the creation of Google and of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).

The project at Illinois involved putting scholarly journals on the web. Current online journals still use some of the innovations that came out of that project.

Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age

Universities are now keeping repositories of the works their authors publish. MIT developed open source software for repositories of papers, which lowers the cost of producing this type of database.

The author is concerned that policy requirements might make placing work into the repository more burdensome. However, these repositories may eventually establish standards for preservable formats, identifiers, and rights documentation and management.

When searching for a journal article that Pitt might not have, I never thought to look at the repository of the institution where an author works. I often just searched to see if the author had a web page.

Wednesday, October 22, 2008

Muddiest Point Week 8

I'm not sure I understand how to upload the html files for my webpage onto the FTP server at Pitt. Are there instructions for this somewhere on Pitt's webpage?

I am going to look and see if there are any, and if so I'll post a link to the information.


So far I have found this document: http://technology.pitt.edu/Documentation/html_inst.pdf
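In the meantime, here is a rough sketch of what the upload step might look like using Python's built-in ftplib. The hostname, login, and directory below are placeholders I made up; the PDF above is the real authority on Pitt's setup.

```python
# Hypothetical example: upload a local index.html to a web directory over FTP.
# The host, login, and remote path are placeholders, not Pitt's actual values.
from ftplib import FTP

with FTP("ftp.example.edu") as ftp:              # placeholder hostname
    ftp.login(user="myusername", passwd="mypassword")
    ftp.cwd("public/html")                       # placeholder remote directory
    with open("index.html", "rb") as page:
        ftp.storbinary("STOR index.html", page)
    print(ftp.nlst())                            # list remote files to confirm
```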

Monday, October 20, 2008

Week 9 Required Readings

All of the readings for XML left me confused. I felt like there was some simple piece I was missing or that was not explained. In the resources for the IBM reading on XML there was a link to an Intro to XML tutorial. I found that doing this tutorial (it's free, but you have to register with IBM) helped me understand just what XML was created for.

A user can create specific tags within a document to denote, for example, whether a piece of text is a title or a postal code. This makes finding those elements of a document easier.

A tag is the markup between angle brackets - < >.
An element is a specific kind of tag. If you break down a tag for something like color, the elements could include <blue>, <red>, <green>, etc.
An attribute is a name-value pair attached to a tag. At first I couldn't think of a good example of this.
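After poking around, though, here is a small example I made up myself (so treat it as my own sketch, not something from the readings). In a made-up record like <patron id="1234">, the attribute is id="1234" - the name id paired with the value 1234. Python's built-in XML module can pull out both elements and attributes:

```python
# A made-up library record to illustrate elements and attributes.
# <patron> and <name> are elements; id="1234" is an attribute (a name-value pair).
import xml.etree.ElementTree as ET

record = '<patron id="1234"><name>Jane Reader</name></patron>'
patron = ET.fromstring(record)

print(patron.tag)                # patron
print(patron.attrib["id"])       # 1234  (the attribute's value)
print(patron.find("name").text)  # Jane Reader  (an element's content)
```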

It enables records to be read more easily by different pieces of software. It also makes the web easier to search: if you are looking for postal codes, a search tool can limit itself to just the elements labeled as postal codes instead of scanning all of the text.
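Here is roughly how I picture that working, using made-up address records and the same built-in Python module (again, my own sketch):

```python
# Pull out just the <postalcode> elements and ignore everything else.
import xml.etree.ElementTree as ET

addresses = """
<addresses>
  <address><city>Pittsburgh</city><postalcode>15260</postalcode></address>
  <address><city>Erie</city><postalcode>16501</postalcode></address>
</addresses>
"""

root = ET.fromstring(addresses)
for code in root.iter("postalcode"):
    print(code.text)   # prints 15260, then 16501
```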

The document can also be made to follow a specific set of rules, set down in a DTD (document type definition).
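I wanted to see what that looks like in practice, so here is a tiny sketch using the third-party lxml library (my own example; the little DTD just says a <note> must contain a <to> and a <body>):

```python
# Validate documents against a very small DTD using lxml (pip install lxml).
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""
<!ELEMENT note (to, body)>
<!ELEMENT to   (#PCDATA)>
<!ELEMENT body (#PCDATA)>
"""))

good = etree.XML("<note><to>Amy</to><body>Hi!</body></note>")
bad  = etree.XML("<note><body>Missing the to element</body></note>")

print(dtd.validate(good))   # True
print(dtd.validate(bad))    # False
print(dtd.error_log)        # explains which rule the bad document broke
```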

XML is stricter than HTML. You need to make sure all of your elements are closed and nested in the right order, and tag names are case sensitive; otherwise you will get an error message.


Introduction to XML: http://burks.bton.ac.uk/burks/internet/web/xmlintro.htm
This gives some background on XML, but I didn't really understand what it meant; I needed some concrete examples. That is why I liked the tutorial I mentioned above, even though it too ended up over my head.

A survey of Standards: http://www.ibm.com/developerworks/xml/library/x-stand1.html
This gives a list of standards for XML that have been made over time by various authorities on the internet. I found the links in this document to be rather helpful.

Introduction to XML Schema: http://www.w3schools.com/Schema/schema_intro.asp
This is like the HTML tutorials from last week. It gives examples of what the XML documents would look like and examples of how to code them.

Extending Your Markup: An XML Tutorial
This is another explanation of the elements that make up XML.



Thursday, October 9, 2008

Week 8 Comments

Lauren's
https://www.blogger.com/comment.g?blogID=7036399065753048748&postID=491126717523791674&page=1


Intro to IT
https://www.blogger.com/comment.g?blogID=4487027148249158402&postID=4795457760235039192&page=1

Week 7 Muddiest Point

When I am in downtown Pittsburgh, my iPod sometimes picks up a "Free Wi-Fi" connection that I do not believe is the free two-hours-a-day Wi-Fi. When I looked it up online, I found out that some of these hotspots may be up to nefarious things, and that when you are connected to most public Wi-Fi spots your computer may be at risk. Is there any way to protect your computer and information? Also, how can you tell the difference between a legitimate Wi-Fi connection and one that is up to no good?

Wednesday, October 8, 2008

Assignment 5 - Koha

Here is my link to assignment 5. My topic of interest was books for a Consumer Health Library.

http://pitt4.kohawc.liblime.com/cgi-bin/koha/bookshelves/shelves.pl?viewshelf=8

Week 8 Required Readings

http://www.w3schools.com/HTML/ - W3 HTML Tutorial
This is a tutorial teaching the elements of HTML. It introduces them slowly, one by one, each building on the last. The first and last time I used HTML was in 1997, and back then I had just borrowed bits of code from the source of other websites. This tutorial gives a clear example of what each piece of HTML is and does.

The tutorial points out that upper vs. lower case letters are unimportant in HTML tags, but the W3C thinks lower case should be used, just in case they ever change their minds.

A lot of the commands in HTML are pretty straightforward: italics is represented by an "i", subscript by "sub", and so forth. I liked that "Try it" editors were available to mess with the code in a hands-on fashion.
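Just to convince myself the tags do what the tutorial says, I generated a tiny page with a short Python script (my own throwaway example, not part of the tutorial) and opened it in a browser:

```python
# Write a minimal page using a few of the basic tags from the tutorial.
# Opening test.html in a browser shows the italics and the subscript.
page = """<html>
  <body>
    <h1>My test page</h1>
    <p>This is <i>italic</i> text, and water is H<sub>2</sub>O.</p>
  </body>
</html>"""

with open("test.html", "w") as f:
    f.write(page)
```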
HTML Cheatsheet - http://www.webmonkey.com/reference/HTML_Cheatsheet/
This document provided a list of easy to use HTML codes.

Learning CSS: http://www.w3schools.com/css/css_intro.asp
CSS stands for Cascading Style Sheets. This was developed to solve the problem of styling pages with font formatting inside the HTML itself, which apparently made large sites expensive for web developers to maintain.
Using a style sheet makes coding colors, fonts, and backgrounds easier. Instead of having to repeat that formatting code in every web page, the style rules live in a separate file that each page can link to, and the browser displays the text the way the style sheet says. This page is a good example of how this works.
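To see the "separate file" idea concretely, here is a small sketch of my own (not from the tutorial): one style sheet linked from a page, so changing the color in style.css changes every page that links to it.

```python
# One style sheet shared by any number of pages: edit style.css once and
# every page that links to it picks up the change.
css = "body { font-family: Georgia, serif; color: navy; }"

html = """<html>
  <head><link rel="stylesheet" type="text/css" href="style.css"></head>
  <body><p>This paragraph is styled by the external style sheet.</p></body>
</html>"""

with open("style.css", "w") as f:
    f.write(css)
with open("page.html", "w") as f:
    f.write(html)
```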

It works like the HTML tutorial, giving specific examples for each element of a web page.


Beyond HTML
This article looks at an academic library that let its librarians run free with FrontPage to create web pages for their own parts of the library. I initially thought this would lead to creativity in each department, with everyone having their own unique online voice. However, the librarians were given little to no training, so every page ended up different, which made it difficult for students to use the pages to locate information. The new CMS uses CSS, which led to a more uniform look for the librarians' web pages and made the site easier for students to navigate.

The way they went about implementing it, I thought, was practical. It is hard to switch everyone over to a new system at once, so doing it piecemeal made sense. Also, they trained people in how to use it, which was a step ahead of the old FrontPage system.