Categories
Charity Environment Search Engines

Ecosia 🌍

What is Ecosia?

Simply put, Ecosia is a search engine that plants trees with its profits.

💻📱 👉 💷💲 👉 🌱🌳

Which Search Engine Does Ecosia Use?

Ecosia is an organisation and search engine in its own right, but its results are powered by Microsoft Bing. Bing itself is carbon neutral, and Microsoft as a whole is looking to go green, having committed to be carbon negative by 2030.

How Green is Ecosia?

Ecosia recognises the impact the internet has on the environment of our planet. The search engine runs on renewable energy, meaning your searches aren’t negatively impacting the planet.

“If the internet were a country it would rank #3 in the world in terms of electricity consumption” – Ecosia, 2018

In fact, searching with Ecosia is actually positively impacting the planet, with each search removing CO2 from the atmosphere. How? Because they plant trees with their profits.

As mentioned above, Bing (which powers Ecosia) is carbon neutral, so searching using Ecosia is a win-win from the perspective of your carbon footprint 👣

How Does Ecosia Make Money?

Like Google, Ecosia doesn’t make money from the search results themselves; it makes its revenue from the ads that sit alongside the results.

Every time you click on an advert on Ecosia, you contribute to their revenue, which ultimately leads to trees being planted somewhere around the world.

Ecosia tree tracker

They have a helpful counter on their search results to show you how many trees you’ve personally contributed towards.

So far they have planted over 100 million trees worldwide, supporting projects in 15 countries.

Why am I Promoting Ecosia?

I wrote this article because I think Ecosia is awesome. They’re an organisation trying really hard to do the right thing, and they’re clearly having an impact.

Congrats Ecosia on your success and thank you for what you’re doing for the world 🙏🎉🎊

Ecosia.org 🌍 give it a go 😊

Categories
Computers Internet News Search Engines Software Technology

What is Shodan?

EDITOR NOTE: This is Jonny’s 75th post on Technology Bloggers! Jonny was a complete newbie to blogging when he wrote his first post (about prosthetic limbs) but he is now somewhat of an expert – although he probably wouldn’t agree! – note by Christopher

Recently a couple of articles have appeared on large US websites about a type of search engine called Shodan. This search engine has been around for about three years, but it is different from Google and its cohorts in many ways. I looked at it and could not understand it at all, so what is it, and why is it causing such concern?

A screenshot of the Shodan website
Expose online devices

I have seen Shodan described as “The scariest search engine on the Internet”. This CNN money article explains that Shodan navigates the Internet’s back channels. It’s a kind of “dark” Google, looking for the servers, webcams, printers, routers and all the other stuff that is connected to and makes up the Internet.

What interest could there be in such capability? Well, a lot, apparently. The system allows an individual to find security cameras, cooling systems and all types of home control systems that we have connected to the Internet. (See Christopher’s series about his British Gas system here).

One serious problem is that many of these systems have little or no security because they are not perceived as threatened. Shodan searchers have however found control systems for a water park, a gas station, a hotel wine cooler and a crematorium. Cybersecurity researchers have even located command and control systems for nuclear power plants and a particle-accelerating cyclotron by using Shodan.

Hacking apart, it turns out that the world is full of systems that are attached via a router to the office computer and web server, and on to the outside world. That means access for anyone who can find them and might like to turn off the refrigeration at the local ice rink, shut down a city’s traffic lights or just turn off a hydroelectric plant.

The Shodan system was designed to help police forces and others who might have a legitimate need for such a tool, but what happens when it gets into the wrong hands? Security is practically non-existent: just get your free account, do a few searches and see what you find.
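
For the technically curious, Shodan also exposes its search through an official Python library. Here is a minimal sketch, assuming you have registered an account and obtained an API key (free accounts are limited, and searches may require query credits):

```python
import shodan  # official library: pip install shodan

API_KEY = "YOUR_API_KEY"  # placeholder - use your own key
api = shodan.Shodan(API_KEY)

try:
    # A generic query; Shodan indexes service banners, not page content
    results = api.search("nginx")
    print("Results found:", results["total"])
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("port"), match.get("org"))
except shodan.APIError as error:
    print("Search failed:", error)
```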

See this Tech News World article for a further look at the ethical and practical issues that such a freely available product might bring.

Regular readers will be aware of my interest in these types of problems through my work at the Bassetti Foundation for Responsible Innovation. I am not sure how the development and marketing of such a tool could be seen as responsible behaviour, but as I have been told on many occasions during interviews, there are plenty of other ways of finding out such things. These types of systems merely gather information that is already available and make it usable, nothing more, so (the argument goes) they are not doing anything wrong.

Do you agree?

Categories
Internet

Is there really that much diversity on the internet?

The internet is big, right? Okay, it is massive. With that massiveness one naturally associates extreme diversity. Don’t get me wrong: across the entire internet there is amazing variation, with billions of people adding their own spin to the net.

What I am going to investigate in this post though is how diverse the ‘main’ internet is. What I mean by that is the internet that we use every day. How diverse is the most regularly used/visited content? Is there really as much choice as we think, or is the majority of the internet dominated by a few firms?


To go about this research I am going to use Alexa, which gathers statistics on website traffic. For most sites the data isn’t that accurate; however, for really busy sites the numbers are so great that the reliability of the data is much higher, which is why I can use it.

Alexa's Logo

Google

According to Alexa, Google.com is the most visited site on the web. How could it not be? Alexa estimates that 50% of all internet users visited Google.com in the last three months. Second on the list of most visited sites is Facebook, trailing with just 45% of internet users visiting the site.

Remember, however, that is just Google.com; Google has a massive monopoly over the internet. Of the 100 most visited sites on the web, 18 are owned by Google – 16 localised sites, Google.com and GoogleUserContent.com (the site you see when there is an error finding/displaying a page).

Google undoubtedly has reduced diversity on the internet, having such a monopoly on the sites we all visit. The thing is, it isn’t just 18 sites. Google also owns YouTube (the third most visited site on the net), Blogspot, which is ranked 10th, Blogger at 47 (Blogger and Blogspot are now one) and Blogspot.in (India) ranked 73rd. That means 21 of the most visited sites on the net belong to Google, meaning it owns more than one fifth of the ‘main’ internet.

Googlite Logo
Google’s dominance on the web suggests that a lot of us are Googlites!

Can you call the internet diverse when, in the top one hundred sites, one firm owns more than a fifth of them? Maybe, but what does the rest of the field look like?

Microsoft

Unsurprisingly, the company that is arguably Google’s main rival is in second place. Yahoo and Microsoft are currently in a ‘Search Alliance’, which restricts competition, so I am going to count Yahoo’s sites among those that Microsoft owns/influences. Here is the list of sites that Microsoft owns/influences which are top 100 websites:

  • Yahoo.com – Ranked 3rd
  • Live.com – Ranked 7th
  • Yahoo.co.jp – Ranked 16th
  • MSN.com – Ranked 17th
  • Bing.com – Ranked 29th
  • Microsoft.com – Ranked 30th – ironic how it is lower than many of the other sites it owns!
  • Flickr.com – Ranked 53rd and Yahoo owned

Therefore Microsoft own/influence 7 of the top 100 sites. Add that to Google’s 21, and 28 of the top sites on the net are owned by two firms. More than a quarter.
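
If you fancy reproducing this kind of tally yourself, a few lines of Python are enough. Here is a minimal sketch using a hypothetical slice of the ranking data rather than the real Alexa list:

```python
from collections import Counter

# Hypothetical mapping of top-ranked domains to their owners (illustrative only)
owners = {
    "google.com": "Google", "youtube.com": "Google", "blogspot.com": "Google",
    "yahoo.com": "Microsoft/Yahoo", "live.com": "Microsoft/Yahoo", "bing.com": "Microsoft/Yahoo",
    "amazon.com": "Amazon", "imdb.com": "Amazon",
    "facebook.com": "Facebook", "wikipedia.org": "Wikimedia",
}

counts = Counter(owners.values())
total = len(owners)
for owner, n in counts.most_common():
    print(f"{owner}: {n} sites ({n / total:.0%} of this sample)")
```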

I am starting to think the ‘main’ internet is not as diverse as one may first assume.

Amazon

Next on the list of internet giants comes Amazon. Amazon.com is ranked 10th, whilst Amazon Germany (Amazon.de) is ranked 91st and Amazon Japan (Amazon.co.jp) is 95th. Amazon also owns the Internet Movie Database (IMDB.com) which is the 50th most visited site. Amazon owns 4 of the top 100 sites.

Amazon's Logo
32 sites gone.

Alibaba Group

The Alibaba Group is a privately owned Chinese business, which owns Alibaba.com, Tmall (tmall.com), Taobao (Taobao.com) and Sogou.com. The group therefore account for four of the sites that make up what I am calling the ‘main internet’.

36 sites taken by just 4 companies. How diverse is our internet?

eBay

Next we come to eBay.com which sits 23rd on the list of top 100 sites. eBay International AG (ebay.de) is in 80th place, followed by eBay UK (ebay.co.uk) in 86th. eBay also owns PayPal (paypal.com) which is ranked 46th.

eBay steals another 4 sites, leaving just 60 of our hundred, and so far only 5 firms are involved.

Time Warner

CNN (cnn.com), AOL (aol.co.uk) and The Huffington Post (huffingtonpost.com) are all sites owned by Time Warner. Time Warner is the sixth business involved now, leaving just 57 sites.

WordPress

The blogging platform WordPress (wordpress.com) is ranked 19th, and its brother, which allows users to host the content management system on their own site (wordpress.org) is ranked 83rd.

The Official WordPress Logo
There goes another two sites, meaning just 55 left, and only seven players so far.

Twitter

Ranked number 8 on the list is Twitter, and its URL shortener (t.co) is ranked 31st, meaning Twitter is also one of the big players in the top 100 sites, arguably with some form of domination over the internet.

Twitter's Logo
47 sites of the top 100 accounted for and a mere eight organisations involved.

The Rest

Of the final 53 sites, 5 are adult-only sites, leaving 48 – although many of these are either part of, or are themselves, a much bigger group.

Some familiar faces appear in the other 48 sites: Facebook (2nd), Wikipedia (6th), LinkedIn (11th), Apple (34th), Tumblr (37th), Pinterest (47th), BBC Online (48th), Ask (54th), AVG (62nd), Adobe Systems Incorporated (67th), About.com (81st), ESPN (82nd), Go Daddy (85th), Netflix (89th), The Pirate Bay (92nd) and CNET (97th).

Remove these very well known, well established, and massive brands, and we are left with 32 sites – less than a third. Of the remaining sites, around half are Chinese, showing the growing influence and usage of the internet in China.

My Verdict

In this post I have established that of the sites we visit most regularly, 47 are owned by just eight organisations. Does that really represent the freedom that we all believe the internet offers?

I was surprised by the type of content, and the limited number of different sites that there are in the global top 100. It would seem that the most visited sites consist of search engines, social media sites and news websites. Interesting statistics.

So, what is your verdict on how diverse the internet we use everyday is? I personally am not quite as convinced as I was before writing this article that the internet is quite as free and diverse as we all believe.

Please note these rankings are changing all the time, and all content was correct according to Alexa.com at the time of writing – the 6th of July 2012.

Categories
Blogging

Good blogging practice – publishing reliable information

The web is a massive bank of data, which is far too big to be regulated. Because the web can’t be regulated, it is very easy for false information to spread – fast.

If you are a blogger, it is really important that you publish information which is reliable and trustworthy. Don’t copy what the crowd says unless you know they are right, as this is not only misleading to your readers, but can also see you get penalties dished out by search engines. If you get a reputation for publishing unreliable content, the likelihood is that your readership will fall.

When you publish something that you have found out elsewhere, you need to make sure that it is accurate and reliable before you publish it.

How to Mythbust Rumours

When you find information on the web, in order to ensure that it is reliable, it is always a good idea to check that it appears elsewhere. A general rule of thumb is to check that what you are reading is the same on 3 other sites, one of which is a highly reputable site.
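
If you like to think about checks like this programmatically, here is a minimal sketch of the ‘3 and 1’ rule; the site names are purely illustrative:

```python
def passes_three_and_one(corroborating_sites, reputable_sites):
    """Return True if a claim appears on at least 3 other sites,
    at least one of which is considered reputable."""
    reputable_matches = [s for s in corroborating_sites if s in reputable_sites]
    return len(corroborating_sites) >= 3 and len(reputable_matches) >= 1

# Illustrative example
reputable = {"bbc.co.uk", "guardian.co.uk", "gov.uk"}
sources = ["bbc.co.uk", "someblog.example", "anotherblog.example"]
print(passes_three_and_one(sources, reputable))  # True
```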

So what is a reputable website?

Government Websites

There are a few ways to identify whether a site is reputable or not. One way is to see if it is a government website. Any site which is government run is likely to be very reputable. Government websites usually end in their own unique domain name extension: in the USA government sites end in .gov or .fed.us, in the UK .gov.uk, in France .gouv.fr, in Canada .gc.ca, in India .gov.in, and the list goes on.
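
As a rough first filter (not a guarantee of trustworthiness), you could even screen for these extensions in code; the suffix list below is deliberately short and incomplete:

```python
from urllib.parse import urlparse

# A rough, non-exhaustive list of government domain endings
GOV_SUFFIXES = (".gov", ".fed.us", ".gov.uk", ".gouv.fr", ".gc.ca", ".gov.in")

def looks_like_government_site(url):
    """Return True if the URL's hostname ends in a known government suffix."""
    host = urlparse(url).hostname or ""
    return host.endswith(GOV_SUFFIXES)

print(looks_like_government_site("https://www.nasa.gov/news"))     # True
print(looks_like_government_site("https://www.example.com/news"))  # False
```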

Major News Corporations

Government sites won’t always report things that you want to verify though, so there are other ways to tell if a site is reputable. Big news websites like BBC.co.uk/News and Guardian.co.uk will usually only publish information that is factual and accurate, so you can usually trust them.

The Guardian's logo
The information they publish is likely to be accurate; however, it may not be impartial, so that is something to watch out for. Often news firms will take a political side, and therefore report news in a certain way – and may only publish part of a story.

High PageRank Sites

Google PageRank is calculated largely by the number of backlinks a page or site has. If a website has a very high PageRank (6+) then it is likely that it has a lot of other sites linking to it, most probably because it publishes a lot of high quality content, which people find useful and therefore link back to. High PageRank sites aren’t always trustworthy, but the higher up the spectrum of PageRank you go, the less likely it is that a site is going to be providing false information.

If a website is a PageRank 8, 9 or 10, then unless it has manipulated Google’s algorithm (through black hat SEO, which will only work for a short while before Google catches it), the site is likely to be extremely reliable and reputable, and therefore you should be able to trust the information, data and facts it produces.
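
For the curious, the idea behind PageRank can be sketched in a few lines: each page’s score is repeatedly shared out along its outbound links, so pages with many (and well-ranked) backlinks accumulate a higher score. This is a toy illustration on a made-up four-page link graph, not Google’s actual implementation:

```python
# Toy PageRank power iteration on a hypothetical link graph
links = {
    "A": ["B", "C"],  # page A links to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],       # C ends up with the most backlinks (A, B and D)
}

damping = 0.85
rank = {page: 1 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for page in links:
        backlink_share = sum(
            rank[src] / len(outs) for src, outs in links.items() if page in outs
        )
        new_rank[page] = (1 - damping) / len(links) + damping * backlink_share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # C should rank highest
```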

1,000,000 to 1

If 1 highly reputable site is saying one thing, but 1 million other (not reputable) sites are saying another, then the chances are that the 1,000,000 sites are just recycling the same false information, creating a massive bank of misinformation. This is one reason why you should be really careful who you trust on the web, and also make sure that you verify information with at least one reputable site.

Academic Research

Verifying information with at least 3 sources, one of which is reputable, is also advised in academic research. Therefore, if you use the same standards on your blog, you can’t go wrong! Search engines and readers alike will respect you for providing good quality, highly reputable content.

Technology Bloggers Policy

Every time I write an article and quote information/statistics etc. I always try to follow the 3 and 1 rule: check the information appears on 3 other sites, at least one of which is ‘reputable’. This means that everything I write should be reputable.

The post guidelines ask all writers to ensure they use the 3 and 1 rule; however, we cannot guarantee that all writers do. In our Privacy Policy we state how we try to ensure all content is true and factual, but it is always advisable to independently verify information for yourself.

Do You Verify Your Content?

Do you always try to ensure that you use the 3 and 1 rule when publishing information? That applies not only to blog posts, but also to comments. If not, what measures do you use, or don’t you think it really matters?

Categories
Business

How important is the quality of hosting to online retailers?

This is a sponsored post. To find out more about sponsored content on Technology Bloggers, please visit our Privacy Policy.

With many brick-and-mortar businesses adding an online version of their high-street store to their portfolio, it’s important that firms choose the right web hosting service. With myriad services offering cheap deals, firms ought to be wary regarding offers that appear to be too good to be true – because they usually are.

On the surface, purchasing web hosting that costs £20 a month seems like a steal. In fact, it is a steal. However, the only thing being nicked is precious uptime for online retailers, as the vast majority of cheap hosts go hand-in-hand with downtime.

Downtime – a retailer’s worst nightmare

For online retailers, downtime is especially important; every second of downtime is potentially a lost sale. Would you rather pay a premium for quality web hosting that is reliable and constantly up, or pay a third of the price for web hosting that keeps going down? In the long-term, it may cost more for firms to pay for cheap, but less reliable hosting.
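
To put rough numbers on that trade-off, here is a back-of-the-envelope sketch; the uptime figures, prices and sales rate are illustrative assumptions, not real quotes:

```python
# Illustrative comparison of cheap vs premium hosting for a small online shop
hours_per_month = 30 * 24
revenue_per_hour = 50.0  # assumed average sales per hour (GBP)

cheap = {"price": 5.0, "uptime": 0.99}      # assumed 99% uptime
premium = {"price": 20.0, "uptime": 0.999}  # assumed 99.9% uptime

for name, host in (("cheap", cheap), ("premium", premium)):
    downtime_hours = hours_per_month * (1 - host["uptime"])
    lost_sales = downtime_hours * revenue_per_hour
    total_cost = host["price"] + lost_sales
    print(f"{name}: {downtime_hours:.1f}h down, "
          f"{lost_sales:.0f} GBP in lost sales, {total_cost:.0f} GBP true monthly cost")
```

Under these made-up numbers the ‘cheap’ host ends up several times more expensive once lost sales are counted, which is the whole point.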

In addition, using a web hosting service in your own time zone could be important, especially for smaller firms. Imagine if your store goes down but your hosting provider is half-way across the world. This is certainly not ideal for any store looking to make sales. For example, say your UK-based store goes down at lunchtime: no amount of calls at 12pm is going to wake a firm located half-way across the world, tucked up in bed at 12am. It’s a nightmare scenario.

SEO

The importance of SEO over the last 10 years has changed the face of the internet. An increasing number of online retailers are producing fresh content in a bid to become a publishing authority in the eyes of search engines.

However, when a cheap hosting company leaves a retailer with dead links, 404 errors and other harmful downtime, what are these search engines going to think? Bounced traffic isn’t going to look good in the eyes of Google or Bing.

Technology Bloggers 404 error
Technology Bloggers 404 Page

Eventually, a retailer could slip down the rankings, and get flanked by its competition. It takes a lot of dedication to work up the search rankings, so don’t let a bogus hosting firm ruin your company and its prospects.

Security

In addition, security should be a top priority for online retailers. The number of hackers roaming cyberspace is vast and, make no mistake, they’re ready to capitalise on unprotected websites. By opting for secure web hosting which features SFTP and SSL, a business and its clients can feel assured that all sensitive data is kept in safe hands.

As you can see, the quality of web hosting is an absolutely integral part of the foundations of success for online retailers. In an era of cost cutting and tight purse strings, it might be tempting to plump for a cheap web hosting service from the other side of the world, but in the long term you may end up opening your wallet more often than you think.

Categories
Computers Internet Search Engines

How to proceed in the age of big data?

A couple of weeks ago I read an article in the New York Times about the age of big data, and today at a science and technology conference I got into a conversation about the same thing with a US public health official.

Much has been written (and I am a guilty party) about Google’s quest for information, including allegations of infringements of privacy etc, but not all of this capability should be seen in a negative light. I would like to give you a few examples of why.

A wealth of data

Google collects all of the search terms used by every user and categorizes them. Let’s take a hypothetical situation: you are the director of a large hospital in Manchester. What can Google tell you about your job? Well, probably a lot. Let’s say that this week there is an enormous peak in the search terms “flu symptoms” or “rash on back and neck” used across the Greater Manchester area. Indirectly, the knowledge of these search trends tells you that you should prepare your hospital, because late next week you will have a massive influx of patients with flu or some other contagious disease as it takes hold of the population.

This information is potentially lifesaving, as one of the main problems with epidemics is that they come out of nowhere and so health centres are not properly prepared.
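
A crude version of this kind of early warning can be sketched with nothing more than a moving average over weekly query counts; the numbers below are made up purely for illustration:

```python
# Toy spike detector over weekly counts of a search term (hypothetical data)
weekly_counts = [120, 130, 118, 125, 122, 128, 131, 410]  # the last week spikes

def is_spike(counts, window=4, threshold=2.0):
    """Flag the latest week if it exceeds `threshold` times the recent average."""
    baseline = sum(counts[-window - 1:-1]) / window
    return counts[-1] > threshold * baseline

if is_spike(weekly_counts):
    print("Unusual rise in searches - consider preparing extra capacity")
```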

Search terms can also give an indication of how the housing market will behave, with a rise in searches for houses in a certain area being reflected six months later in new sales. The types of house searched for could also improve planning, as developers would see what people were looking for and where.

Analysts and programmers are currently working on how to expand on the simple examples above using search terms as wider indicators; an approach called ‘sentiment analysis’ looks particularly promising.

This form of analysis looks at terms used during online communication and categorizes them by sentiment. The logic is that in an area that is prospering, terms will be generally positive, but in an area threatened by decline, such as the closure of industry or other societal problems, the terms will differ. This is not dissimilar to the conversation analysis sociologists use to obtain a person’s own sentiments about their position in life, with their true feelings reflected in the terms they use without thought. The hope is that an accurate analysis of this type might signal unfolding problems before they become a reality, so that action can be taken in specific areas to avoid social breakdown.
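
At its simplest, sentiment analysis can be reduced to counting positive and negative words. Real systems are far more sophisticated, but a toy sketch with a made-up lexicon shows the idea:

```python
# Toy lexicon-based sentiment score (illustrative word lists only)
POSITIVE = {"great", "hiring", "new", "thriving", "opening"}
NEGATIVE = {"closure", "redundant", "crime", "closing", "worried"}

def sentiment_score(text):
    """Return (#positive - #negative) / #words for a piece of text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

print(sentiment_score("new factory opening and hiring locally"))  # positive
print(sentiment_score("worried about the closure of the plant"))  # negative
```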

I have addressed these issues in more depth on the Bassetti Foundation website, but want to conclude by saying the following: in my posts I have often raised the issue of data collection as a problem, and the collection of personal data for advertising, or any other purpose for that matter, does raise serious ethical issues. But here Google et al could be sitting on a mine of extremely useful and possibly globally important data, if the technology and political will are developed to use it correctly.

Categories
How To Guides Search Engines SEO

Tips to improve your mobile SEO

Mobile search engines use different algorithms and bots from those used for traditional web searches. They evaluate a website as it is rendered on a mobile phone, and rankings are computed based on how well the page renders for the phone that submits the query. One thing you can do to improve your mobile SEO is to verify the user agents you serve, so that mobile crawlers can pick up your content.

Mobile search engines are not as finely tuned as traditional search engines. They still place a lot of weight on bounce rates, using mobile visitors as a barometer for how well a website renders on phones.

One simple way to improve your mobile search results is to follow traditional SEO strategies. Mobile indexes and bots do rank things differently from web search, but the fundamentals carry over: alt tags, heading tags and title tags are still dominant in mobile SEO.

After applying traditional SEO strategies, the next step is to create a secondary mobile style sheet for your website. This allows existing pages to be formatted for viewing on mobile phones without having to create separate content, so you keep the SEO value you have already built up without creating new pages. You can use the mobile style sheet to block elements from being rendered by giving them “display: none”. All mobile phones, with the exception of iPhones, will automatically pull the “handheld” style sheet.

iPhones behave differently and do not look for the mobile “handheld” style sheet. To address this, copy your handheld sheet and create one that is geared for the iPhone. The iPhone is designed to render entire website pages, yet statistically people still prefer mobile-formatted content on their iPhones.

What Google search results look like on a smartphone
Sometimes, mobile search engines will rank traditional pages but consider them ill-suited for rendering on mobile phones, even with mobile-specific style sheets. When this happens, the mobile search engine will rank the traditional content but “transcode” it for viewing on mobile phones.

Transcoded versions of websites are hosted on temporary subdomains of the search engine’s own domain. Typically this provides an under-optimised user experience, because navigation is sometimes broken or misplaced and individual pages are split into several pages for faster downloading. It can also prove problematic when it comes to tracking activity on your mobile website, and if someone links to the transcoded content, your website might not receive credit for the links. Address this by sending a “no-transform” header with your content: the no-transform directive in the Cache-Control header should stop transcoding.
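
If your site happens to be served by a Python application, the header is a one-liner. Here is a minimal sketch using Flask (just one of many ways to set it):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "<h1>Mobile-friendly page</h1>"

@app.after_request
def add_no_transform(response):
    # Ask intermediaries (including transcoding proxies) not to modify the response
    response.headers["Cache-Control"] = "no-transform"
    return response
```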

Next, you should include a mobile sitemap. Google provides tools that can help you build one. If your website uses multiple markup languages, such as WML (Wireless Markup Language) or XHTML, you should submit a separate mobile sitemap per language used on the website. Ensure that you link to mobile sitemaps in your robots.txt file, the same as you would for traditional sitemaps.
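
Google’s mobile sitemap format is ordinary sitemap XML with an extra mobile namespace and an empty <mobile:mobile/> tag per URL. A small generator sketch follows; the URLs are placeholders and it is worth checking the current format against Google’s own documentation:

```python
# Generate a basic mobile sitemap (the URLs are placeholders)
urls = ["http://m.example.com/", "http://m.example.com/products"]

lines = [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"',
    '        xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">',
]
for url in urls:
    lines += ["  <url>", f"    <loc>{url}</loc>", "    <mobile:mobile/>", "  </url>"]
lines.append("</urlset>")

with open("sitemap-mobile.xml", "w") as f:
    f.write("\n".join(lines))
```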

When you are submitting a mobile sitemap, add the mobile style sheet and the no-transform header as well; together these give mobile search engines the best chance of ranking your content. Another excellent tip is to make sure your traditional content will work on mobile phones. This gives you the best chance of faring well across the widest range of browsers and phones.

If your website’s content does not use external style sheets, or contains sloppy code or too many media files, it will have problems rendering on mobile phones. You might want to create mobile-specific content on a mobile sub-directory or sub-domain, but this can generate plenty of problems for SEO strategists because it can end up splitting traffic and links between two sets of similar pages.

You should use a “handheld” style sheet with the no-transform designation, and you can also rearrange code so that it is better suited for crawling and rendering. Redirection, browser detection and self-selection are how websites and mobile phones interact with one another. Browser detection and redirection is the process of checking which browser a visitor is using to access your website: if a mobile browser requests the traditional website, a single PHP script can redirect the user to the mobile site, and if a desktop browser requests the mobile website, it can redirect them to the traditional one. This proves helpful if your traditional website out-ranks your mobile website in mobile searches.
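
The paragraph above mentions a single PHP script; the same idea sketched in Python, continuing the Flask example from earlier with a deliberately simplified user-agent check and a placeholder mobile URL, looks like this:

```python
from flask import Flask, request, redirect

app = Flask(__name__)

# Deliberately simplified list of substrings that suggest a mobile browser
MOBILE_HINTS = ("iphone", "android", "blackberry", "windows phone", "opera mini")

def is_mobile(user_agent):
    ua = (user_agent or "").lower()
    return any(hint in ua for hint in MOBILE_HINTS)

@app.route("/")
def desktop_home():
    if is_mobile(request.headers.get("User-Agent")):
        return redirect("http://m.example.com/")  # placeholder mobile URL
    return "<h1>Full desktop site</h1>"
```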

When it comes to mobile SEO, creating a duplicate copy of your website and placing it on a sub-domain is risky in itself. Most website owners assume that mobile search engines can interpret the duplication, but they can become confused, and when that happens your new mobile content has very little chance of outranking your traditional website in mobile searches. Redirection and browser detection should take care of these issues, but there is always a chance of duplicate content taking value away from the content on the main website.

If this happens, you can try using a canonical tag to pass the value from your mobile website back to your main website, and then rely on browser detection and redirection to take care of the rest. The danger in this scenario is that you might hurt your rankings for searches made on more basic mobile phones, because you are pushing all of the SEO value into non-mobile content.

Categories
Computers Internet News Science Search Engines

Search engines are changing the way our memory works

A recent article in Science Mag suggests that the use of computers and the internet might actually be changing the way our memory works.

A series of psychology experiments recently carried out has shown that sometimes, when people were presented with hard-to-answer questions, they began to think of computers.

If participants believed that it would be easy to find answers on Google later, then they had poorer recall of the actual answer, and yet a greater memory of where the answer was stored.

A head x-ray showing someone with a computer for a brain
The researchers said that the internet acts as a tool which we now depend upon to aid our memories, by remembering some data for us.

Here is the abstract for the journal entry:

The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.

In simpler English, what this is saying is that it is now much easier to access data online, mainly thanks to search engines like Google, Bing and Yahoo. If we have a question, we can find the answer in seconds.


This has led to the human brain associating the thought of a problem with computers, as it believes that the internet will be the source of the solution. Search engines are now so embedded in our brains that when we think of a problem, we no longer bother trying to work out the answer for ourselves, but instead associate the possibility of finding the solution with a search engine.

Let’s be honest, who hasn’t been bugged by something, asked someone else who also wasn’t able to help, and as a result was either told “Google it” or thought “I could Google that”? I have; in fact I would say it happens on a weekly basis!

Question time

So what do you think? Are computers, the internet and search engines making us stupid, or is it just that we are now adapting as a race to more efficient ways of finding out information?
