Sunday, September 11, 2016


The Rise of the World Wide Web

Read the text below and answer these questions: 

        By the early 1990s, people were using computers in many different ways. 
Computers were already installed in most schools, offices, and homes. 
They were commonly used for writing papers, playing games, financial accounting,
 and business productivity applications. But very few people used them for 
communication, research, and shopping the way we do now. A man named
 Tim Berners-Lee changed all that. In 1990, Berners-Lee added an exciting hypertext
 and multimedia layer to the Internet and called it the World Wide Web. The rest, 
as they say, is history.
     Believe it or not, the Web was not the first attempt at building a worldwide 
online community. Cutting-edge geeks had been using online services such as
 CompuServe since the early 1980s. There were also thousands of
 privately run Bulletin Board Systems (BBSs), which served the general
 interest of curious nerds and researchers from around the world. Perhaps the 
most ambitious project was the French system Minitel, but it never caught on in the
 rest of the world and eventually faded into obscurity. The experience on these BBSs
 was poor by today's standards. There were no graphics, or even color. There was 
no sound, except of course the obnoxious beeps and gurgles a modem made when
 it initiated a dial-up connection to a server. Bandwidth was also very low compared to today's speeds. Typical operating speeds were between 300 and 1200 baud. Today, a typical broadband connection is thousands of times faster than that.
     The Web was not built for geeks. It was built for everyone. It was built with very high ideals. No single company, government, or organization controls it. It was new and exciting. 
New ideas and words appeared almost daily. Obscure technical terms became 
household words overnight. First it was email. Then it was URL and domain name. Then rather quickly came spam, homepage, hyperlink, bookmark, download, upload, 
cookie, e-commerce, emoticon, ISP, search engine, and so on. Years later we are still 
making up new words to describe our online world. Now we "google" for information. 
We "tweet" what's happening around us to others. The new words never seem to stop!
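Terms like URL and domain name have a precise structure behind them. As a small sketch (not part of the original lesson; the URL is a made-up example), Python's standard `urllib.parse` module can break a web address into the parts these words name:

```python
from urllib.parse import urlparse

# A hypothetical example URL, chosen just for illustration.
url = "https://www.example.com/path/page.html?q=web#history"

parts = urlparse(url)
print(parts.scheme)   # the protocol: "https"
print(parts.netloc)   # the domain name: "www.example.com"
print(parts.path)     # the resource path: "/path/page.html"
```

Every bookmark, hyperlink, and search result you click resolves through exactly these pieces.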
    Just because the Web seems chaotic and unorganized compared to more structured companies and governments doesn't mean it's total anarchy. In 1994, Tim Berners-Lee started the W3C, a worldwide organization dedicated to setting standards for the Web. 
This group is probably the most respected authority on what should and should not be a Web standard. The W3C's mission is to lead the Web to its full potential.
   As a student of English and Technology, you will hear people use the words 'Internet' and 'World Wide Web' almost interchangeably. They are, of course, not the same thing. So what is the difference between the two? Perhaps a simple answer is that the Internet is the biggest network in the world, and the World Wide Web is a collection of software and protocols on that network. A simpler way to put it is that the World Wide Web is an application that runs on the Internet.
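The "application on the Internet" idea can be made concrete. The Web speaks HTTP, a plain-text protocol; the Internet's TCP/IP layer just carries those bytes between machines. A minimal sketch (no real network connection is made, and the host name is a placeholder):

```python
# The Web layer: an HTTP request is just structured text.
host = "www.example.com"  # hypothetical host, for illustration only
request = (
    "GET /index.html HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# The Internet layer (TCP/IP) would carry these raw bytes to the
# server's port 80; the Web is just one protocol riding on top.
payload = request.encode("ascii")
print(payload[:4])  # b'GET '
```

A browser builds text like this for every page you visit; the Internet underneath neither knows nor cares that it is carrying "Web" traffic.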
    The original backbone of the Internet is based on an old military network called ARPANET, which was built by ARPA in the late 1960s. ARPANET was built so information could withstand a nuclear war. The idea was not to have a single point of failure. This means that if part of ARPANET were destroyed in a nuclear war, the rest of it would still work! What made ARPANET so successful was its packet-switching technology, a concept pioneered by Paul Baran and Donald Davies and brought to ARPANET under Lawrence Roberts. 
The idea is that "packets" of information have a "from" address and a "to" address. How they get from point A to point B depends on which roads are open to them. Packet switching is a very elegant thing. Without it, the Internet simply would not work.
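The "no single point of failure" idea can be sketched in a few lines. Below is a toy simulation (the network and node names are invented for illustration): a packet carries "from" and "to" addresses, and a breadth-first search finds whatever path is still open, even when a node is knocked out:

```python
from collections import deque

# A tiny hypothetical network: each node lists its direct neighbours.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def route(packet, down=()):
    """Find any open path from the packet's 'from' node to its 'to'
    node, avoiding nodes that are down. How the packet travels depends
    on which roads are open to it."""
    start, goal = packet["from"], packet["to"]
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route left: the network is partitioned

packet = {"from": "A", "to": "D", "data": "hello"}
print(route(packet))               # ['A', 'B', 'D']
print(route(packet, down={"B"}))   # rerouted: ['A', 'C', 'D']
```

Knock out node B and the packet simply takes the other road through C; only when every path is destroyed does delivery fail.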
   People view the World Wide Web through a software application called a web browser or simply a "browser" for short. Some popular examples of web browsers include Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, and Apple Safari. Browsers allow people to search, view, and even add and edit data on the World Wide Web.
    The Web is not supposed to be a passive experience. Creating new pages for the Web is getting easier all the time. Web editing software is specially designed to work with hypertext languages such as HTML, which is the original specification for the Web. Web editing software normally allows for the WYSIWYG creation of text, images, and hyperlinks between related documents. With web applications such as wikis, MySpace, and Facebook, a typical user can create his or her first 
online presence in a matter of hours.
     In the year 2000, the Internet suffered its first financial crash. Many companies selling products and services on the Web were not living up to sales expectations. This was known as the Dot-Com Bubble. There were many reasons why this happened, but perhaps the two most important were a combination of slow connection speeds and too much optimism. Very few people had fast Internet connections, and many people thought the Internet was "just a passing fad". 
But we know now that the Internet is not a fad. So what happened? Web 2.0 happened!
     What is Web 2.0? It's very hard to say. It's just a phrase to describe a transition from the pre-existing state of 'Web 1.0', which was slow, static, and unusable, to a new 'second web', which was faster, more dynamic, and more usable for the average person. How did these things happen? Easy. Broadband connections made video-streaming sites like YouTube possible. Better design and development practices enabled social media sites like MySpace and then Facebook to attract hundreds of millions of users. Finally, search engine technology improved on sites like Google, where people could actually find the information they were looking for.
     What will be the future of the Web? Easy. More speed and more power. In the future, digital distribution on the Internet is likely to replace all other forms of media distribution including CDs, DVDs, and even radio and television broadcasts.
     I personally feel lucky to be alive in the age of the Web. It is one of the coolest things ever invented. It is unlikely that another invention this wonderful and revolutionary will occur in our lifetime. But I can still dream about the Next Big Thing. And who knows? Maybe you will invent it.

Tuesday, September 6, 2016


      Computer Hardware Peripherals 

After reading the following article, answer these questions:

a- What's the difference between peripherals and components?
b- What's the main purpose of peripherals in a computer?

    'Peripheral' is a generic name for any device external to a computer but still normally associated with its extended functionality. The purpose of peripherals is to extend and enhance what a computer is capable of doing without modifying the core components of the system. 
    A printer is a good example of a peripheral. It is connected to a computer, extends its functionality, but is not actually part of the core machine. Do not confuse computer peripherals with computer accessories. An accessory can be any device associated with a computer, such as a printer or a mousepad. A printer is a peripheral, but a mousepad is definitely not one. A mousepad does not extend the functionality of a computer; it only enhances the user experience. 
     Peripherals are often sold separately from computers and are normally not essential to a computer's functionality. You might think the display and a few vital input devices such as the mouse and keyboard would be necessary, but certain computers, such as servers or embedded systems, do not require mice, keyboards, or even displays to be functional. 
    Peripherals are meant to be easily interchangeable, although you may need to install new drivers to get all the functionality you expect out of a new peripheral device. The technology which allows peripherals to work automatically when they are plugged in is called plug and play. A plug and play device is meant to function properly without configuration as soon as it is connected. This isn't always the case, however. For this reason some people sarcastically refer to the technology as 'plug and pray'. Still, plug and play was a big deal when it was introduced in the 1990s. 
    Before then, installing a new peripheral could take hours, and could even require changing some jumper settings, DIP switches, or even hacking away at drivers or config files. It was not a fun time except for real hardware geeks. With plug and play technology, all the nasty jumpers and DIP switches moved inside the peripheral and were virtualized into firmware. This was a clear victory for the common, nontechnical person! Peripherals normally have no function when not connected to a computer. They connect over a wide array of interfaces. 
    Some common ones from the past include PS/2 ports, serial ports, parallel ports, and VGA ports. These are all being replaced by newer standards, including USB, Bluetooth, Wi-Fi, DVI, and HDMI ports. The most common peripheral linking technology is probably USB. Why? USB is good because you can daisy-chain a lot of peripherals together quickly, it is quite fast and growing ever faster in recent editions, and it even provides enough power to supply some smaller peripheral devices like webcams and flash drives. Some peripherals are even used for security.
    A good example of this is the dongle. The dongle is often used to protect very expensive applications from software piracy. Here is a list of common peripherals you should be familiar with as an IT professional. Keep in mind the list is always changing due to changing technologies: 
- monitors or displays 
- scanners 
- printers 
- external modems 
- dongles 
- speakers 
- webcams 
- external microphones 
- external storage devices such as USB-based flash drives and portable hard disk drives 
- input devices such as keyboards, mice, etc. are normally considered peripherals as well 
Now you know a little more about peripherals and what makes them different from components and accessories. 



a- What's the difference among ALPHA VERSIONS, BETA VERSIONS, AND RELEASE CANDIDATE VERSIONS when we talk about computer apps?

b- Considering the distribution of apps nowadays, what do you understand by "shareware, freeware, upgrade versions, and open source"?
    Without software applications, it would be very hard to actually perform any meaningful task on a computer, unless one was a very talented, fast and patient programmer. Applications are meant to make users more productive and get work done faster. Their goal should be flexibility, efficiency, and user-friendliness.
     Today there are thousands of applications for almost every purpose, from writing letters to playing games. Producing software is no longer the lonely profession it once was, with a few random geeks hacking away in the middle of the night. Software is a big business and the development cycle goes through certain stages and versions before it is released.
   Applications are released in different versions, including alpha versions, beta versions, release candidates, trial versions, full versions, and upgrade versions. Even an application's instructions are often included in the form of another application called a help file.
      Alpha versions of software are normally not released to the public and have known bugs. They are often seen internally as a 'proof of concept'. Avoid alphas unless you are desperate or else being paid as a 'tester'.
     Beta versions, sometimes just called 'betas' for short, are a little better. It is common practice nowadays for companies to release public beta versions of software in order to get free, real-world testing and feedback. Betas are very popular and can be downloaded all over the Internet, normally for free. In general, you should be wary of beta versions, especially if program stability is important to you. There are exceptions to this rule as well. For instance, Google has a history of excellent beta versions which are more stable than most companies' releases.
     After the beta stage of software development comes the release candidates (abbreviated RC). There can be one or more of these candidates, and they are normally called RC 1, RC 2, RC 3, etc. The release candidate is very close to what will actually go out as a feature complete 'release'.
     The final stage is a 'release'. The release is the real program that you buy in a shop or download. Because of the complexity in writing PC software, it is likely that bugs will still find their way into the final release. For this reason, software companies will offer patches to fix any major problems that end users complain loudly about.
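The stages above follow a fixed order of maturity. As a small sketch (the stage names and the `is_safer` helper are illustrative, not any company's actual scheme):

```python
from enum import IntEnum

# Release stages in order of maturity (illustrative ordering only).
class Stage(IntEnum):
    ALPHA = 0    # internal proof of concept, known bugs
    BETA = 1     # public testing, use with caution
    RC = 2       # feature complete, close to the release
    RELEASE = 3  # what you buy in a shop or download

def is_safer(a, b):
    """Is stage a at least as mature as stage b?"""
    return a >= b

print(is_safer(Stage.RC, Stage.BETA))     # True
print(is_safer(Stage.ALPHA, Stage.BETA))  # False
```

This is why advice like "avoid alphas" makes sense: each stage strictly outranks the one before it.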
    Applications are distributed in many ways today. In the past, most software was bought in stores in versions called retail boxes. More and more, software is being distributed over the Internet as open source, shareware, freeware, or traditional proprietary and upgrade versions.