
GROUP 5
ONLINE JOURNALISM THRIVES ON THE DEVELOPMENT OF WEB TECHNOLOGIES. GIVE A HISTORICAL ANALYSIS OF THE DIFFERENT TECHNOLOGIES THAT GAVE BIRTH TO EACH OF THE WEBS. DISCUSS HOW EACH DEVELOPMENT AFFECTED THE PRACTICE OF ONLINE JOURNALISM.

MEMBERS
MA’ARUF MUSTAPHA – U13MM1017
SANUSI IBRAHIM ALMANZAWLY – U13MM2023
OBOR VINCENT OTUMALA M. – U13MM1023
AHMED HABIB – U13MM1099
SANI ISYAKU ISYAKU ISYAKU – U14MM2056
ADO ABBA SULAIMAN – U14MM2059
ZUBAIRU ZUBAIRU MUDASHNU – U13MM1146
SHUAIBU AISHA YAHAYA – U13MM1088
YUSUF KAYODE – U13MM1188
OCHE EDACHE COLEMAN – U13MM1106
ABDULLAHI USMAN – U13MM1118
JOGAI ABRAHAM PAUL – U13MM1104
HUSSEIN FATIMA BINTA – U13MM1017
IBRAHIM SHAFAATU SHUAIBU – U14MM2017
MICHEAL CYNTHIA – U13MM1085
OBEBE LADI VERONICA – U13MM1078
OLURONTOBA OLUWATOSIN DEBORAH – U13MM1117
ONDA UWANYA ROSE – U13MM56
KOMOLAFE SHADE WINNER – U13MM1044
ANYADIUFU GLORIA CHIZOBA – U14MM2029
ZAMBUA JOSEPH JOHN – U13MM1007
INTRODUCTION
Technological advances have long been linked to the ability to disseminate information rapidly. This began with the invention of the printing press, continued with the emergence of broadcast media such as radio and television, and has led to the present dominant medium, the internet. However, new media do not develop on their own; instead they morph from something that already exists, which means that each new medium rides on the shoulders of an existing medium.
There is no doubt that internet technology has taken over the practice of journalism: as the internet changes, journalists equally change the way they package and disseminate news to the public. The internet has made the world a small place for journalists to explore, has made research on varying subject matter available through chatting and the electronic exchange of messages, and has created a new market and new forms of online journalism in which journalists specialize as online journalists.
The World Wide Web has, over the past few years, utterly transformed the world of journalism. It has given the profession a more incisive experience and has compressed the timescales journalists work to, especially in newspaper production, where staff now learn to write for online newspapers in what is called digital publishing.
HISTORICAL ANALYSIS OF THE DIFFERENT TECHNOLOGIES THAT GAVE BIRTH TO THE WEBS

To understand emerging web trends, it is helpful to understand internet history and how it has evolved into what some call the dawn of the age of information. The internet was an evolution of computer networking that began in the late 1950s, hit a turning point in 1969 when ARPANET (Advanced Research Projects Agency Network) connected UCLA to the Stanford Research Institute's Augmentation Research Center, and became official in 1983 when all hosts hooked up to ARPANET were switched over to TCP/IP. While some are of the opinion that the internet began in 1969, others say its beginning was in 1983: the internet is based on a standard protocol for computers to exchange information, and that standard protocol was launched in 1983. Aghaei et al. (2012) note that the World Wide Web (commonly known as the web) is not synonymous with the internet but is the most prominent part of the internet, and can be defined as a system that enhances human cognition, communication and co-operation. The World Wide Web is a system of interlinked hypertext documents (text, images, videos and other multimedia) accessed via the internet. To put this idea more succinctly, the internet is the gateway one must cross before getting to the web.
Tim Berners-Lee's view of the capabilities of the World Wide Web is expressed by three innovations, typically associated with three phases: the Web of documents (Web 1.0), the Web of people (Web 2.0) and the Web of data (the still-to-be-realized Web 3.0). History has it that on 12 March 1989 Tim Berners-Lee, a British computer scientist then working at CERN, proposed a concept to make the CERN communication system more effective; this was soon discovered to have the capacity to be used throughout the world. In 1990 the Belgian computer scientist Robert Cailliau joined him in proposing to use hypertext "to link and access information of various kinds as a web of nodes in which the user can browse at will", and this was the foundation of today's web. Going by this trend of constant evolution, the World Wide Web can be expected to keep surprising the world's imagination in the near future. This section highlights the past, present and future of the web and its functionality and purpose, beginning the historical analysis from the inception:
WEB 1.0
Web 1.0 refers to the first generation of the World Wide Web, which was basically defined as an information space in which the items of interest, referred to as resources, are identified by global identifiers called Uniform Resource Identifiers (URIs). Web 1.0 was the first implementation of the web and it lasted from 1989 to about 2005. It has been defined as a web of information connections; its inventor, Tim Berners-Lee, considered it a "read-only" web. Web 1.0 was characterized by one person or organization pushing content out to many people via websites or e-mail newsletters (Paul 2005). The first-generation web was an era of static pages intended for content delivery only. Web 1.0 technologies include the core web protocols: HTML, HTTP and URI.
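To make the "read-only" character concrete, the short sketch below is a minimal, hypothetical Python example (the URL is a placeholder) showing all that a Web 1.0 reader could do: fetch a static HTML document by its URI over HTTP and consume it exactly as published.

    # Minimal sketch of a Web 1.0 interaction: the client can only read.
    # The URL is a placeholder standing in for any static, hand-authored page.
    from urllib.request import urlopen

    url = "http://example.com/index.html"
    with urlopen(url) as response:               # a plain HTTP GET; nothing is written back
        html = response.read().decode("utf-8")   # the page arrives as fixed HTML markup
    print(html[:200])                            # the reader simply consumes what was published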

WEB 2.0
Web 2.0 was described by Dale Dougherty in 2004 as a read-write web. The concept began with a conference brainstorming session between O'Reilly and MediaLive International. Aghaei et al. (2012) explain that web 2.0 is not merely a new version of web 1.0: flexible web design, creative reuse, updating, and collaborative content creation and modification were all facilitated through web 2.0. Web 2.0 technologies have the capacity to facilitate the gathering of collective intelligence and to aid collaboration and the management of large global crowds with common interests in social interaction. The main technologies and services of web 2.0 include blogs, Really Simple Syndication (RSS), wikis, mashups, tags, folksonomy and tag clouds, some of which are described as follows:
BLOGS
Blogs have rightly been described as user-generated web journals that offer opinions and information and that may include text, images, and links to other blogs and web pages. Some blogs are confined to personal expression, but others make provision for reactions and comments from readers. Blogs have become an indispensable resource in the online environment; they are put to use by different people in different ways, but the end use is meant to serve the mass audience. Historically it was not always so: the most accurate and fitting ancestor of today's blog is the online diary, in which the diarist would keep an online journal about themselves (Beal 2007). The first of these diaries was links.net, opened by Justin Hall in 1994 (Staff 2011). The available literature credits Jorn Barger of the weblog Robot Wisdom with coining the term "weblog" in December 1997, while Peter Merholz shortened "weblog" to "blog" in 1999. The early 2000s were a period of growth for blogs. Staff (2011) observed that there were more than 152 million active blogs by the end of 2010, and virtually every mainstream news source has at least one blog, as do many corporations and individuals; an instance is the regular posts on CNN.com from Mashable editors and writers. Some services peripheral to the blogosphere can also be traced back to the early 2000s: Technorati, the first major blog search engine, was launched in 2002; Audioblogger, the first podcasting service, was founded in 2003; and the first video blogs started in 2004.
RSS
The history of RSS can be traced back to 1999, when Netscape created a standard, RSS version 0.90; the company wanted to use an XML format to distribute news, stories and information (RSS specification, n.d.). However, a similar version of RSS was created by Dan Libby, which caused confusion in the market because it carried the same name as the previous one. Meanwhile, Rael Dornfest at O'Reilly released RSS version 1.0; this new specification was incompatible with the previous RSS versions because the specifications were very different. In an attempt to minimize further confusion, UserLand named its next release RSS version 2.0, which is very similar to the 0.9x series and is generally considered compatible with it. Around 2003, RSS was donated to a non-commercial third party, Harvard Law School, in order for the specification to be endorsed by all.
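To make this concrete, the following minimal sketch in Python shows what such a feed looks like: an RSS 2.0 document is simply XML whose channel contains item entries that a reader can poll and parse. The example uses only the standard xml.etree library, and the titles and links are invented placeholders.

    # Minimal sketch of an RSS 2.0 feed built with Python's standard library.
    # The channel, titles and links are invented placeholders, not real publications.
    import xml.etree.ElementTree as ET

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example Newsroom"
    ET.SubElement(channel, "link").text = "http://example.com/news"

    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "Sample headline"
    ET.SubElement(item, "link").text = "http://example.com/news/sample-headline"

    # A feed reader polls this XML periodically and lists any new items for the subscriber.
    print(ET.tostring(rss, encoding="unicode"))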
FOLKSONOMY
Folksonomy is a system of collaboratively creating and managing tags to annotate and categorize digital content. Isabella Peters defines a folksonomy as the sum of the user-generated metadata of a collaborative information service. Among other things, a folksonomy is a user-generated taxonomy used to categorize and retrieve web content such as web pages, photographs and web links, using open-ended labels called tags. In other words, thanks to folksonomy it is possible to group web pages, photographs and other items via tags. Taxonomy, as used above, means a classification of organisms into groups based on similarities of structure or origin.
Vander Wal identifies two types of folksonomies: the broad folksonomy and the narrow folksonomy.
Broad folksonomy: this comes into play when multiple users can apply the same tag to an item, thereby producing information about which tags are more popular. This is often the pattern of activity on Twitter and Facebook.
Narrow folksonomy: this occurs when a small number of users, usually including the creator of the content or item, apply tags that can each be used only once per item, somewhat the opposite of the former. A good example here is Flickr.
Folksonomies of both the broad and the narrow type tend to make a body of information increasingly easy to search, discover and navigate over time. Searching again for a site or piece of content that one has accessed before can be tedious; this technology makes it easy to rediscover and navigate to such content quickly.
Another important point to note is that folksonomies are usually found online, but they can arise in a number of offline contexts as well.
When they are well developed, folksonomies are ideally accessible as a shared vocabulary that is both created by, and familiar to, their primary users. It is pivotal to note here that folksonomy tools are not part of the WWW protocols; they arise only where special provisions are made. Folksonomy is particularly useful when no other descriptive text is available, and that makes it unique.
What is worth mentioning here is the relationship between folksonomy and the semantic web. The semantic web is an evolving extension of the WWW in which content is expressed not only as natural language but also in a format that can be read and used by automated tools. Folksonomies could be used in conjunction with semantic web technologies to provide rich descriptions, but not quite yet: metadata from folksonomies is not consistent or reliable.
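The distinction between the two types can be illustrated with a small sketch in Python; the users, items and tags below are invented for illustration only. In a broad folksonomy, tag popularity emerges from counting how many users attached each tag to an item, whereas a narrow folksonomy keeps each tag at most once per item.

    # Sketch of a broad folksonomy: many users may apply the same tag to one item.
    # Users, items and tags are invented for illustration.
    from collections import Counter

    taggings = [
        ("user1", "photo42", "sunset"),
        ("user2", "photo42", "sunset"),
        ("user3", "photo42", "beach"),
    ]

    # Tag popularity for the item emerges from the aggregate behaviour of users.
    popularity = Counter(tag for _, item, tag in taggings if item == "photo42")
    print(popularity.most_common())        # [('sunset', 2), ('beach', 1)]

    # A narrow folksonomy instead keeps a simple set of tags per item,
    # typically assigned once, often by the item's creator (as on Flickr).
    narrow_tags = {"photo42": {"sunset", "beach"}}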

MASHUPS
In web development, a mashup is a web page or web application that aids the combination of information and services from multiple sources on the web. In essence, it integrates content from more than one source into a single application (for example, combining data on a topic of interest with geographical data). Mashups can be grouped into seven categories: mapping, search, mobile, messaging, sports, shopping and movies; more than 40 percent of mashups are mapping mashups. It is easier and quicker to create mashups than to code applications from scratch in traditional ways, and this capability is one of the most valuable features of web 2.0. Mashups are generally created using application programming interfaces (APIs). The term "mashup" is not formally defined by any standard-setting body, but some common ground can be gathered.
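As a rough illustration of the idea, the sketch below (Python, with invented stories and approximate coordinates standing in for what a mapping provider's API would normally return) joins editorial data from one source with geographic data from another so that the result could be plotted on a map, the most common kind of mashup.

    # Sketch of a mapping mashup: join stories from one source with location data
    # from another. All data here is invented or approximate, for illustration only.
    stories = [
        {"headline": "Flooding displaces residents", "city": "Lagos"},
        {"headline": "New rail line commissioned", "city": "Kaduna"},
    ]

    # In a real mashup these coordinates would come from a mapping provider's API.
    coordinates = {"Lagos": (6.52, 3.38), "Kaduna": (10.52, 7.44)}

    mapped_stories = [
        {**story, "lat_lon": coordinates.get(story["city"])} for story in stories
    ]
    for entry in mapped_stories:
        print(entry["headline"], entry["lat_lon"])   # ready to be placed on a map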
TAGS
Tagging is an important feature of web 2.0 services; it is now also part of other database systems, desktop applications and operating systems. In information systems, a tag is a keyword or term assigned to a piece of information (such as an internet bookmark, a digital image, a database record or a computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Tags are generally chosen informally and personally by the item's creator or by its viewers, depending on the system, although they may also be chosen from a controlled vocabulary. The use of keywords as part of an identification and classification system long pre-dates computers. Paper data storage devices, notably edge-notched cards that permitted classification and sorting by multiple criteria, were already in use before the twentieth century, and faceted classification has been used by librarians since the 1930s.
In the late 1970s and early 1980s the UNIX text editor Emacs offered a companion program called Tags that could automatically build a table of cross-references, called a "tags table", which Emacs could use to jump between a function call and that function's definition. This use of the word "tag" did not refer to metadata tags but was an early use of the word in software to refer to a word index.
Online databases and early websites developed keyword tags as a way for publishers to help users find content.
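In practice, finding items again by tag works like an inverted index from each tag to the items that carry it. The following minimal sketch in Python illustrates this; the bookmarks and tags are invented for illustration.

    # Sketch of tags as searchable metadata: an inverted index from tag to items.
    # The bookmarks and tags are invented for illustration.
    from collections import defaultdict

    bookmarks = {
        "https://example.com/budget-2024": ["economy", "budget"],
        "https://example.com/transfer-window": ["sport", "football"],
        "https://example.com/fuel-subsidy": ["economy", "policy"],
    }

    index = defaultdict(set)
    for url, tags in bookmarks.items():
        for tag in tags:
            index[tag].add(url)

    # Browsing by tag is simply a lookup of the keyword in the index.
    print(sorted(index["economy"]))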
 WIKIS
The history of wikis dates from 1994, when Ward Cunningham invented the concept and gave it its name, releasing the first wiki in 1995 in order to facilitate communication between software developers. That first wiki, now known as "WardsWiki", evolved as a feature of the software, and the growing body of users developed a unique "wiki" culture. By 2000, WardsWiki had developed a great deal of content outside its original stated purpose, which led to the spinoff of content onto sister sites, most notably MeatballWiki.
The website Wikipedia, a free-content encyclopedia, was launched in January 2001 and quickly became the most popular wiki, which it remains to this day.

WEB 3.0 (SEMANTIC WEB)
John Markoff of The New York Times coined the term web 3.0 in 2006. It refers to a supposed generation of internet-based services that collectively comprise what might be called the "intelligent web", such as those using the semantic web, microformats, natural language search, data mining and artificial intelligence technologies, which emphasize machine-facilitated understanding of information in order to provide a more productive and intuitive user experience. According to Lassila and Hendler (2007), web 3.0 is an amalgamation of web technologies and knowledge representation, a web structure built around artificial intelligence (AI). Web 3.0 was conceived as a revision of the semantic web, which allows programmers and users to reference real objects without importing the underlying document in which the object, abstract or otherwise, is described (Hendler and Berners-Lee 2010). Web 3.0 is therefore also known as the semantic web. The Semantic Web is a collaborative movement led by the international standards body the World Wide Web Consortium, for whom the Semantic Web "provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries" (Choudhury 2014). The main purpose of the semantic web is to make the web readable by machines and not only by humans (Aghaei 2012). The Semantic Web, as originally envisioned, is a system that enables machines to "understand" and respond to complex human requests based on their meaning; such an "understanding" requires that the relevant information sources be semantically structured. Web 3.0 is considered to have the following components, identified by Tanser (2016):
MICROBLOGGING: sites on which users share their thoughts in a few characters, for example Twitter and ilurk.
VIRTUAL REALITY WORLDS: these provide spaces that users visit to interact with other users on a 3D platform.
CUSTOMIZATION/PERSONALIZATION: features that allow users to create a unique and individual experience; examples are Google and Amazon.
MOBILITY: mobile devices, and the ability to connect to the web through them, make possible a huge number of applications.
ON-DEMAND COLLABORATION: users interact by supervising documents, collaborating and making changes, all in real time; examples are Google Drive and Salesforce.com.
The explanation of web 3.0 would be incomplete without bringing in the originator's perspective. Tim Berners-Lee originally expressed the Semantic Web as follows: if HTML and the Web made all the online documents look like one huge book, RDF, schema and inference languages will make all the data in the world look like one huge database. He describes the semantic web architecture as a stack of layers, briefly summarised as follows:
• Unicode and URI: Unicode is used to represent any character uniquely, whatever language that character was written in, while Uniform Resource Identifiers (URIs) are unique identifiers for resources of all types. The function of Unicode and URI can be described as providing a unique identification mechanism within the language stack of the semantic web.
• Extensible Markup Language (XML): this is a technology that permits users to structure a document on the web with their desired vocabulary, and it is characterized by its ability to carry large documents across the web. XML is used as a base syntax for the other technologies developed for the upper layers of the semantic web. XML and its related standards, such as Namespaces (NS) and schemas, form a common means of structuring data on the web without conveying the meaning of that data.
NS is used to identify and distinguish XML elements that come from different vocabularies; it supports mixing elements from various vocabularies to perform a specific function. XML Schema assures that the information received matches the information sent when two applications at this level exchange information with each other.
• Resource Description Framework (RDF): RDF is a simple data model that uses URIs to identify web-based resources and describes relationships between the resources in terms of named properties and values. RDF is used to define and write simple statements about the meaning of terms and concepts in a form that computers can readily process (Choudhury 2014). Generally, the RDF family supports interoperability at the semantic level. RDF forms the base web language, so that agents are able to make logical inferences and perform functions based on metadata. (A small worked example is given after this list.)
• RDF Schema: this provides modelling primitives for organizing web objects into hierarchies, and a predefined, basic type system for RDF models. The key primitives described by RDF Schema are classes and properties, subclass and subproperty relationships, and domain and range restrictions. RDF Schema provides a simple reasoning framework for inferring the types of resources.
• Ontology: an ontology can be defined as a collection of terms used to describe a specific domain: its properties, the relationships between those properties and the differences between them, with the ability to support inference.
• Logic and Proof: this layer sits on top of the ontology structure to make new inferences through an automatic reasoning system. Using such reasoning systems, agents are able to deduce whether particular resources satisfy their requirements. Choudhury (2014) observed that the logic layer is used to enhance the ontology language further and to allow the writing of application-specific declarative knowledge, while the proof layer involves the actual deductive process, as well as the representation of proofs in web languages (from the lower levels) and proof validation.
• Trust: the last layer of the stack addresses trust, in order to provide an assurance of the quality of the information on the web and a degree of confidence in the resource providing that information. The semantic web is not limited to publishing data on the web; it is about making links that connect related data. In 2007 Berners-Lee introduced a set of rules that have become known as the Linked Data principles for publishing and connecting data on the web:
1. Use URIs as names for things.
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL).
4. Include links to other URIs so that more things can be discovered.
Data providers can add their data to a single global data space by publishing data on the web according to these Linked Data principles.
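As the small worked example promised above, the sketch below shows RDF triples that follow these principles. It is only a sketch: it assumes Python with the third-party rdflib package installed, and the article URI, the ex: vocabulary and the property values are all invented placeholders rather than a real media organization's data.

    # Sketch of RDF triples that follow the Linked Data principles.
    # Requires the third-party rdflib package; all URIs and values are placeholders.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/news/")                 # invented vocabulary
    article = URIRef("http://example.org/news/article/123")    # rules 1 and 2: an HTTP URI as a name

    g = Graph()
    g.bind("ex", EX)
    g.add((article, RDF.type, EX.Article))                     # rule 3: useful information in RDF
    g.add((article, EX.headline, Literal("Flooding displaces residents")))
    g.add((article, EX.about, URIRef("http://example.org/place/Lagos")))  # rule 4: link to another URI

    print(g.serialize(format="turtle"))                        # a machine-readable description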
WEB 4.0
Web 4.0 is envisioned as an open, linked and intelligent web. It offers a new, more comprehensive and personalized model of user interaction, not limited simply to displaying information but proposing to behave like an intelligent mirror that concentrates on solutions to what the user needs.
There is no proper definition of web 4.0 yet, because it is still an idea in progress. Web 4.0 is also called the symbiotic web.
The symbiotic web means interaction between humans and machines in symbiosis. Burrus provides the following example of how the relationship between humans and machines will be redefined:
"Good morning. You are flying to Boston today; take a raincoat, it is raining. By the way, that flight you were taking has already been cancelled, but don't worry about it: there was a mechanical fault and I have already booked you a new one. I will tell you about it on the way to the airport. And remember, you said you were going to exercise every day, and I am here to remind you to exercise."
The idea of the symbiotic web is that once the metadata are organized, humans and machines can interact in symbiosis. The web will be able to think and make decisions with regard to user searches and content, and to give suggestions based on educated studies of how we live and what we want or need. For instance, suppose someone wants to learn how to fix a crippling glitch in a piece of technology that has only recently been put on the market. It is too early for other users to help, and the developers are unable to troubleshoot the problem because it never occurred in the testing phase of development. Theoretically, future web technology would allow a computer to analyse the problem and offer a solution; in fact, it may be able to fix the problem itself.
There are four digital disruptions which will play a major role in fuelling web 4.0 technology. They are:
Ballooning big data: it is estimated that by 2020 there could be four times more digital data than all the grains of sand on earth. Put simply, there will be more data than existing systems can process, and this will drive new computing solutions for processing very large data sets.
Data analytics and business intelligence: the ease of capturing big data's value, and the magnitude of its potential, vary across sectors, and they will increase the pressure for intelligent decision making. Analysts will need to be exactly spot on to sieve the useful business insights out of the heap of clustered and unstructured data. This will form the core of data analytics and business intelligence.
Intelligent sensors: there is no putting this genie back in the bottle. Everyone will expect to be tracked, and continuous monitoring will create better business opportunities through sensors that act as human-machine interfaces.
Digital-to-physical transfer and publishing: 3D printing is transforming manufacturing by turning digital files into physical objects once the prototype is ready as a computer-assisted design.
HOW WEB TECHNOLOGIES HAVE AFFECTED ONLINE JOURNALISM PRACTICE
The advent of technologies from web 1.0 to web 4.0 has affected the practice of journalism massively, and the rapid adoption of the internet by journalists is a nationwide phenomenon. The process by which computerization has impacted the media in the 21st century has moved on many fronts and at different speeds. The effects of web technologies on journalism practice include, but are not limited to, the following:
 A NEW REACH
It is the development of the World Wide Web (WWW) over the last fifteen years or more which has utterly transformed the publishing landscape of our era. During the era of web 1.0 the web was basically read-only, but the subsequent webs were read-write, allowing comments and feedback, and this made the web user-friendly and attracted millions of users. Moreover, the interactive nature of the web has broken geographical boundaries, giving room for online journalists to sell stories to media houses both locally and globally. The internet allows people to share information online regardless of where they live. It has also encouraged pluralism of choice and a wider representation of issues. The internet has broken cultural barriers and brought the world closer together.
A NEW SPEED
The internet is the fastest-growing medium in history. The digitalization and convergence of computer technologies, which are the linchpin of the internet, have greatly influenced the way news and information are produced and disseminated. Times have indeed changed: globalization has become a buzzword and has brought with it change and competition. With the advent of web technology, news is now immediate and current, and audiences are virtually transported to the scene of the event. The audience expects updates every minute, and this has become a habit in most countries; even in Nigeria, people go online to post, read and learn the latest any time there is a crucial issue.
  NEW VOICE
People have always felt the need to share information and hold power to account, and this is indeed what journalism is. The advent of the internet has made everybody a journalist, just as Gutenberg made everybody a printer with the invention of the printing press, and as the broadcast media did in their turn. There is now what is called citizen journalism, which has given everyone the opportunity to become media creators and owners instead of passive users. Wikipedia.org demonstrates that access to information and the capacity to publish are no longer the privilege of a selected few. Because of the rapid growth of citizen journalism, competition among media organizations online has increased, to the extent that television and radio companies have now moved into producing news in the written word in order to survive the competition. Journalists often feel threatened by the increased participation of citizen journalists in news gathering, reporting and dissemination, despite the view that describes such participation as a plus for participatory democracy.
NEW DIGITAL ETHICS
Generally, technology has improved the process of identifying stories that are newsworthy. Feeds from social networking services like Facebook and Twitter now provide a snapshot of events happening around the world from the viewpoint of first-hand witnesses, and blogs and citizen news sources offer analytical perspectives from the ground faster than print and television can provide. The growth of easy digital publishing technology brings with it new ethical dilemmas for journalists. The internet also helps journalists to get tips from which further investigation can be carried out.
LINKED DATA
This is an area where technological innovation has had an impact on journalism, at least for the few journalists who have embraced it. Linked data is a movement to make the web more semantic, taking us from a collection of hyperlinked documents to hyperlinked data and facts. In some domains, such as music, the principle is becoming well established and media companies are already making use of it (Guardian News and Media 2017).

 A NEW ACCOUNTABILITY
Many journalists write baseless pieces that contain no atom of fact, and often they do not research a particular story before publishing it. Although the online environment is open to all and is not formally censored, journalists have become more conscious of what they post in order to avoid being arrested and jailed by the government. With hyperlinks there is instantaneous access to in-depth information online, an aspect of storytelling that news companies tend to ignore.
ENTREPRENEURIAL JOURNALISM
Entrepreneurial journalism describes a field of media where journalism is the underlying discipline upon which content-based businesses and services that can make money are created. The advent of web technology has changed the whole concept of journalism from the popular view of the profession (www.rohitbhargava.com). Because media owners often have little interest in independent reporting and instead exploit their newspapers and broadcasting channels for political or economic interests, journalists have been called upon to become self-employed. Journalists now channel their articles and writing with the aim of making a profit, although this usually depends on the journalist's skills and abilities. The reporter here charges for services and advertising space (Nedeljkovic et al. 2014).



REDUNDANCY OF TRADITIONAL MEDIUM
When the web first came, most professional journalists in the traditional media remained skeptical of the web's value as a news source and lamented the quality of ideas found online. With the advent of the internet, a growing number of people now read their news online, and this poses a threat to traditional media, especially newspapers. Traditional media and blogs compete for the attention of the general audience and readership. This competition, and the underlying convergence of content and technology, implies new strategic challenges for media businesses.
However, at a time when blogging is considered journalism and thousands of websites are built daily, traditional media are becoming almost insignificant. All kinds of information can be found on the internet, and anyone with a computer can obtain the latest news on Google, updated almost to the minute. If online journalism is quicker, cheaper and more convenient, then traditional journalists cannot survive except by going with the flow and adapting their style of writing for an online environment.

                                                       CONCLUSION
The effect of web technologies on the practice of journalism is both negative and positive. The negative aspect of the web, especially web 2.0, is what Andrew Keen described in The Cult of the Amateur as the blind leading the blind, i.e. "infinite monkeys providing infinite information for infinite readers", perpetuating the cycle of misinformation and ignorance. Keen (2007:47) further pointed out that despite the contributions of citizen journalists in bringing news and information, they simply do not have the resources to bring us reliable news; he argued that they lack not only expertise and training but also connections and access to information. Nevertheless, the web's effect on news reporting is considered the clearest evidence of a revolutionary technology.
           

              REFERENCES
Aghaei, Nematbakhsh and Farsani (2012). Evolution of the World Wide Web: from Web 1.0 to Web 4.0. International Journal of Web & Semantic Technology (IJWesT), Vol. 3, No. 1, January 2012.
Boriachon and Dagouat (2007). Internet Evolution: From Web 1.0 to 3.0.
Clarkin, Larry and Holmes, Josh. "Enterprise Mashups: The New Face of Your SOA". SOA World Magazine, http://soa.sys-con.com/. Retrieved 21 June 2017.
Evans, Mike. "The Evolution of the Web from Web 1.0 to Web 4.0".
Fichter, Darlene. "What Is a Mashup?" http://books.infotoday.com/books/Engard/Engard-Sample-Chapter.pdf (accessed 21 June 2017).
Guardian News and Media Limited (2017). niemanreports.org.
Choudhury, Nupur (2014). World Wide Web and Its Journey from Web 1.0 to Web 4.0. International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5 (6), 2014.
Peenikal, Sunilkumar (2009). "Mashups and the Enterprise" (PDF). MphasiS – HP.
Sparkes, Mattew. "Impact of Technology on Journalism". theguardian.com; mattewsparkes.wordpress.com.
Keen, Andrew. The Cult of the Amateur. https://www.degruyter.com/view/product/42362.
"What Might Web 4.0 Look Like and Should You Be Preparing?" http://www.imediaconnection.com/content/34206.asp
"Semantic Web 2008 free report". http://rewrite.com/2008/01
"The Web 4.0 Era Is Upon Us". http://www.pcword.com/article/143110/article.html
Webdesignerdepot Staff (2011). A Brief History of Blogging. Retrieved 2017 from https://www.webdesignerdepot.com/2011/03/a-brief-history-of-blogging/

