
Human Search Engine


"The collector is the true resident of the interior.....The collector dreams his way not only into a distant or bygone world,
but also into a better one".. - Walter Benjamin

Quest is a difficult journey towards a goal, often symbolic or abstract in idea or metaphor. An Adventure.


What is a Human Search Engine?

A person or persons searching multiple search engines, using multiple keywords and phrases, and searching multiple websites for links and information. Also using other media such as TV, Movies, Documentaries, Radio, Magazines, Newspapers, Advertisers and recommendations from other people. Doing all this to find relevant information and websites pertaining to a particular subject, with this subject being Human Intelligence, and the never-ending process of finding ways to improve education and how the public is informed about the realities of our lives and our current situation.

WebCrawler is a metasearch engine that blends the top search results from other search engines.

Filtering - Defragging

Internet Mining (web mining) is the application of data mining techniques to discover patterns from the World Wide Web. Web mining can be divided into three different types: Web usage mining, Web content mining and Web structure mining.

Search Engine Types

Web Robot is a software application that runs automated tasks (scripts) over the Internet.

Aggregate is to form and gather separate units into a mass or whole.

Search Technology

I am a Human Search Engine, but it's much more than that...

Archivist is an information professional who assesses, collects, organizes, preserves, maintains control over, and provides access to records and archives determined to have long-term value.

I'm an Internet Miner exploring the World Wide Web. An Archivist of Information and Knowledge. Extracting and aggregating the best Information and websites that the Internet and the world have to offer. An Information Architect Filtering and Organizing the Internet one website at a time. I'm a Knowledge Moderator, an Internet Scribe, an Autonomous Intelligent Agent, like AI. But it's more than that... I'm a Maven, an accumulator of knowledge who seeks to pass knowledge on to others. A Web Portal is a specially designed web site that brings information together from diverse sources in a uniform way. Extract, Transform, Load is a process in database usage, and especially in data warehousing, that Extracts data from homogeneous or heterogeneous data sources, Transforms the data for storing it in the proper format or structure for the purposes of querying and analysis, and Loads it into the final target (a database; more specifically, an operational data store, data mart, or data warehouse). Lowering the Entropy of the System since 2008.
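As a rough illustration of that Extract, Transform, Load pattern, here is a minimal sketch in Python. The file name, field names and table layout are assumptions made up for the example, not any real system's schema:

```python
import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from a source file (here, a CSV of links)
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: normalize the data into the shape the target store expects
    return [(r["url"].strip().lower(), r["title"].strip(), r["category"].strip())
            for r in rows if r.get("url")]

def load(records, db_path="links.db"):
    # Load: write the cleaned records into the final target (a SQLite table)
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS links "
                "(url TEXT PRIMARY KEY, title TEXT, category TEXT)")
    con.executemany("INSERT OR REPLACE INTO links VALUES (?, ?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("raw_links.csv")))
```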

Welcome to my Journey in Hyperlink Heaven. Over 18 years of internet searches that are organized, categorized and contextualized. A Researcher's Dream. I have already clicked my mouse over a million times, and I've only just begun. I have tracked over 90% of my online activities since 1998, so my Digital Trail is a long one. This is my story about one man's journey through the Internet. Did you ever wonder: what if you shared everything you learned?

To put it simply, "I'm Organizing the Internet". Over the last 18 years, since 1998, I have been surfing the World Wide Web, or Trail Blazing the internet, and Curating my experience. I've asked the internet well over 500,000 Questions so far. And from those questions I have gathered a lot of Information, Knowledge and Resources. I then organized this Information, Knowledge and Resources into categories and published it on my website so that it can be shared and used for educational purposes. I also share what I've personally learned from this incredible, endless journey that I have taken through the internet. The internet is like the universe: I'm not overwhelmed by the Size of the Internet, I'm just amazed by all the things that I have learned, and wondering just how much more I will be able to understand. Does knowledge and information have a limit? Well, let's find out. Adventure for me has always been about discovering limits, and this is just another Adventure. I'm an internet surfer who has been riding the perfect wave for over 18 years. But this is nothing new. In the early 1900s, Paul Otlet pursued his quest to organize the world's information.

Vannevar Bush envisioned the internet before modern computers were being used. 

Feltron - Mundaneum
Organizing Wiki Pages
Art and Science of Curation

World Brain is a World Encyclopaedia that could help world citizens make the best use of universal information resources and make the best contribution to world peace.

Ontology is a Knowledge domain that is usually hierarchical and contains all the relevant entities and their relations.

Ontology (information science) is the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse.

Human Search Engine

A human search engine is not manipulated by money or by defective and ineffective algorithms. A human search engine is created by humans, for humans. We don't have everything, but who needs everything? People want what's important. People want the most valuable knowledge and information that is available, without stupid ads, and without any ignorant manipulation or Censorship. People want a trusted source for information, a source that cares about people more than money. A search engine indexed by human eyes using AI machines.

I'm a Pilgrim on a Pilgrimage. An Internet Pathfinder whose task it is to carry out Daily Internet Reconnaissance Missions and document my findings. No, I'm not an Internet Guru or a Gatekeeper but I have created an excellent Internet Source.
Our physical journeys in the world are just as important as our Mental Explorations in the Mind, the Discoveries are Endless.
These days I seem to be leaving more Digital Footprints than actual footprints. Which one is more meaningful?

Pioneer is to open up and explore a new area. To take the lead or initiative in, or to participate in the development of, something. Leading the way. Trailblazing.

Glean is to extract (information) from various sources. Gather, as of natural products. Accumulate resources.

I'm more of a Knowledge Organizer and Knowledge Sharer than a Knowledge Keeper.

I am just a bee in the hive of Knowledge, doing my part to keep the hive productive. 

Beehive is an enclosed structure in which some honey bee species of the subgenus Apis live and raise their young.

Knowledge Hive
Knowledge Hives
The Hive FCV
The Hive Knowledge Platform (youtube)

"I wouldn't say that I'm a wisdom keeper, I more of a wisdom sharer, which makes everyone a wisdom beneficiary."

"For every minute spent in organizing, an hour is earned."

I feel like a Human Conduit, a passage (a pipe or tunnel), a channel for transferring information, synchronizing information to and from various destinations.

Two Directory Projects are the work accumulated from one Human Editor - The Power of One (youtube)

Looking for Adventure.com has over 60,000 handpicked Websites (External Links). LFA took 14 years to accumulate as of 2016.

Basic Knowledge 101.com has over 50,000 handpicked Websites (External Links). It took 8 years to accumulate as of 2016.

The Internet and Computer Digital Information combined allow a person to save the work that they have done and create a living record of information and experiences. Example: Looking for Adventure.com, "not a total copy of my life, but getting close." Things don't have to be written in stone anymore, but it doesn't hurt to have an extra copy.

When I started in 1998 I didn't know how much knowledge and information I would find, nor did I know what kind of knowledge and information I would find, nor did I know what kind of benefits would come from this knowledge and information. Like a miner in the old days, you dig a little each day and see what you get. And wouldn't you know it, I hit the jackpot. The wealth of information and knowledge that there is in the world is enormous, and invaluable. But we can't celebrate just yet; we still need to distribute our wealth of knowledge and information and give everyone access. Otherwise we will never fully benefit from our wealth of knowledge and information, nor will we ever fully benefit from the enormous potential that it will give us.

"I saw a huge unexplored ocean, so naturally I dove in to take a look. 8 years later in 2016, I have been exploring this endless sea of knowledge, and have come to realize that I have found a home." About my Research

Filtering

Information Filtering System is a system that removes redundant or unwanted information from an information stream using (semi) automated or computerized methods prior to presentation to a human user. Its main goal is the management of the information overload and increment of the semantic signal-to-noise ratio. To do this the user's profile is compared to some reference characteristics. These characteristics may originate from the information item (the content-based approach) or the user's social environment (the collaborative filtering approach).
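As a toy version of the content-based approach described above, an item can be scored by how many of the user's profile keywords it contains. Real systems use far richer features; this is only a sketch with made-up data:

```python
def keyword_overlap(profile_keywords, item_text):
    # Score an item by how many profile keywords appear in its text
    words = set(item_text.lower().split())
    return len(profile_keywords & words)

def filter_stream(items, profile_keywords, threshold=2):
    # Keep only items that match the user's profile strongly enough
    return [item for item in items
            if keyword_overlap(profile_keywords, item) >= threshold]

profile = {"search", "knowledge", "education"}
stream = ["new search engine for education research",
          "celebrity gossip roundup",
          "organizing knowledge with a human search engine"]
print(filter_stream(stream, profile))  # the gossip item is filtered out
```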

Filter (signal processing) is a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal.

Media Literacy

Abstraction is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples. Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose.

Terminology Extraction is a subtask of information extraction. The goal of terminology extraction is to automatically extract relevant terms from a given corpus. Collect a vocabulary of domain-relevant terms, constituting the linguistic surface manifestation of domain concepts.
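A very crude term-extraction pass might just score words by how much more often they appear in the domain corpus than in a general reference corpus. Both corpora here are tiny placeholders, and real extractors handle multi-word terms and proper statistics:

```python
from collections import Counter

def extract_terms(domain_text, general_text, top_n=5):
    # Count word frequencies in the domain corpus and a general reference corpus
    domain = Counter(domain_text.lower().split())
    general = Counter(general_text.lower().split())
    # Score each word by how domain-specific it is (simple frequency ratio)
    scores = {w: c / (general[w] + 1) for w, c in domain.items() if len(w) > 3}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

domain = "search engine indexing crawler indexing ranking crawler"
general = "the quick brown fox jumps over the lazy dog search"
print(extract_terms(domain, general))  # 'indexing' and 'crawler' rank highest
```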

Noisy Text Analytics is a process of information extraction whose goal is to automatically extract structured or semistructured information from noisy unstructured text data.

Fragmented

Noisy Text is text where noise can be seen as all the differences between the surface form of a coded representation of the text and the intended, correct, or original text.

Gatekeeping is the process through which information is filtered for dissemination, whether for publication, broadcasting, the Internet, or some other mode of communication.

Gatekeepers are individuals who decide whether a given message will be distributed by a mass medium. They serve in various roles including academic admissions, financial advising, and news editing. Not to be confused with mass media.

Collaborative Filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc. Sometimes making automatic predictions about the interests of a user by collecting preferences or taste information from many users (collaborating).
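A bare-bones example of the collaborative approach: recommend what the most similar user liked. The ratings table below is entirely made up, and the similarity measure is a deliberately simple stand-in for real metrics:

```python
# Made-up ratings: user -> {site: rating}
ratings = {
    "alice": {"siteA": 5, "siteB": 3, "siteC": 4},
    "bob":   {"siteA": 5, "siteB": 2, "siteD": 5},
    "carol": {"siteB": 4, "siteC": 2},
}

def similarity(a, b):
    # Agreement on commonly rated sites (inverse of total rating distance)
    common = set(a) & set(b)
    if not common:
        return 0.0
    return 1.0 / (1.0 + sum(abs(a[s] - b[s]) for s in common))

def recommend(user):
    # Find the most similar other user and suggest sites the target hasn't seen
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(ratings[user], ratings[u]))
    return [s for s in ratings[nearest] if s not in ratings[user]]

print(recommend("alice"))  # ['siteD'], borrowed from bob's taste
```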

Deep Packet Inspection is a form of computer network packet filtering that examines the data part (and possibly also the header) of a packet as it passes an inspection point, searching for protocol non-compliance, viruses, spam, intrusions, or defined criteria to decide whether the packet may pass or if it needs to be routed to a different destination, or, for the purpose of collecting statistical information that functions at the Application layer of the OSI (Open Systems Interconnection model).

Focus - Attention


So how does one person create databases this large in such a short time?

The techniques and methods are quite simple...

First: When doing internet searches, for whatever reason, you are bound to come across a website or keyword phrase that relates to your subject matter. You then do more searches using those keywords, and save those keywords and websites to your database. This is very important, because you will most likely never come across the same info related to those particular search parameters again, so saving and documenting your findings is essential. Terminology Extraction.

Second: When reading, watching TV, watching a movie or even talking with someone, you are bound to come across ideas and keywords that you could use when searching for more information pertaining to your subject. Again, saving and documenting your findings is very important. It's always a good idea to have a pen and paper handy to write things down, or you can use your cell phone to record a voice memo so that you don't forget your information or ideas. The main thing is to have a subject that you're interested in, while at the same time being aware of what information is valuable to your subject when it finally presents itself. Combining a Human Algorithm with a Randomized Algorithm.

Third: Organizing, updating and improving your database so that it stays functional and easy to access. My time is usually balanced between these three tasks, and yes, it is time consuming. You can also use the Big 6 Techniques when gathering information to help with your efficiency and effectiveness. I also created an Internet Searching Tips help section for useful ideas. List of Glossaries.

One last thing: if you spend a lot of time on the internet doing searches and looking for answers, you are bound to come across some really useful websites and information that were not relevant to what you were originally searching for. So it's a good idea to start saving these useful websites in new categories, or just save them in an appropriately named folder in your documents. This way you can share these websites with friends or just use them at some later time. This is sometimes called Creating Search Trails, of which I have 18 years' worth as of 2016. Not bad for a Personal Web Page. (A minimal sketch of this kind of record-keeping appears below.)
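To make that record-keeping concrete, here is a minimal sketch of a personal search-trail database using Python's built-in sqlite3. The table layout is just one possible choice, not a prescription:

```python
import sqlite3
from datetime import datetime, timezone

def open_trail_db(path="search_trails.db"):
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS trails (
                       found_at TEXT, keywords TEXT, url TEXT, category TEXT)""")
    return con

def save_finding(con, keywords, url, category):
    # Steps 1 and 2: document every useful keyword/website pair as you find it
    con.execute("INSERT INTO trails VALUES (?, ?, ?, ?)",
                (datetime.now(timezone.utc).isoformat(), keywords, url, category))
    con.commit()

def by_category(con, category):
    # Step 3: keep the database organized and easy to access
    return con.execute("SELECT keywords, url FROM trails WHERE category = ?",
                       (category,)).fetchall()

con = open_trail_db()
save_finding(con, "human search engine", "http://example.com", "Knowledge")
print(by_category(con, "Knowledge"))
```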

The Deep Web consists of those pages that Google and other search engines don't index. Deep Web (wiki)

The Dark Web is an actively hidden, often anonymous part of the deep web, but it isn't inherently bad. Dark Internet (wiki)

Deep Web: exploring the Dark Internet, the part of the internet that very few people have ever seen. Memex (wiki)

How the Mysterious Dark Net is going Mainstream (video) -  Tor Project

Google has indexed 1 trillion pages as of 2016, but that is estimated to be only 5% of the total knowledge and information that we have.

Surface Web, also called the Visible Web, Clearnet, Indexed Web, Indexable Web or Lightnet. It is that portion of the World Wide Web that is readily available to the general public and searchable with standard web search engines. It is the opposite of the deep web.


Academics
 
In a way my Human Search Engine is my Dissertation. My Thesis is Basic Knowledge 101 and proving the importance of a Human Operating System in regards to having a more Comprehensive and Effective Education. This is my Tenure... My Education Knowledge Database Project... This is just the beginning of my Intellectual Works. Basic Knowledge 101.com is my Curriculum Vitae. Working on this project I went from Undergraduate Study, through Postgraduate Education, right into a Graduate Program. I started out as a Non-Degree Seeking Student, but I ended up with a Master's Degree and a Doctoral Degree, well, almost. I have done my fieldwork, I have acquired specialized skills, and I have done advanced original Research. My Business Card. But I still have no name for my Advanced Academic Degree. Maybe "Internet Comprehension 101".

HyperLand (youtube)

Is anyone actually studying the Internet? In some ways they are. Internet Studies. I wonder what they're learning? There is also Web Science, which is not the same as Web of Science or Information Science, Peer-to-Peer, Open Source and Free Open Access. I wonder who else is studying these subjects in this particular way besides me? For now I am just a Scholar working on a Bachelor Degree while doing some Postdoctoral Research that 90% of people cannot comprehend. So I guess that makes me a kind of Subject-Matter Expert on Internet Education Knowledge Management. Maybe?

Accreditation


The Information Age

We are now living in the Information Age. A time where information and knowledge are so abundant that we can no longer ignore them. But sadly, not everyone understands what information is, nor do most people understand the potential of Knowledge and Information. The Information Age is the greatest transition of the human race, and of our planet. The power of knowledge is just beginning to be realized. Knowledge and information give us an incredible ability to explore ourselves, our world and our universe in ways that we have never imagined. Knowledge and information can improve the lives of every man, woman and child on this planet. Knowledge and information will also help us understand the importance of all life forms on this planet like never before. This is truly the Greatest Awakening of our world.

Preserving Information
Information Economy
Knowledge Economy
Knowledge Market is a mechanism for distributing knowledge resources.
Knowledge Management
Information Literacy
Information Stations
Information Overload
 

What have I Learned about being a Human Search Engine?

I am a Semantic Web as well as a Human Search Engine. Humans will always be better than machines when it comes to associations, perceptions, Perspectives, Categorizing and Organizing; some things need to be done manually, especially when it comes to organizing information and knowledge. Linking Data, Ontology learning, Library and information science, creating a Visual Thesaurus and Tag Clouds is what I have been doing for 10 years. "Welcome to Web 3.0." I'm an Intelligent Agent combining Logic and Fuzzy Logic, because there are just some things that machines or Artificial Intelligence cannot do, or cannot do well. Automated Reasoning Systems and Computational Logic can only do so much, so we need more Humans than computer Algorithms. Creating knowledge bases is absolutely essential. This is why I believe that having more Human Search Engines is a benefit to anyone seeking knowledge and information. (Human Based Genetic Algorithms)

Structuring websites into syntax link patterns and information into categories or Taxonomies, while trying to stay objective and impartial. Organizing information and websites so that visitors have an easy time finding what they're looking for (Principle of least effort), while at the same time showing them other things related to that particular subject that might also be of interest to them (Abstraction) (Relational Model). More relevant choices, and a great alternative and complement to Search Engines.

But it's not easy to manage and maintain a human search engine, especially for one person. You're constantly updating the link database: adding links, replacing links or removing some links altogether. Then on top of that there's the organizing and the adding of content, photos and video. And all the while your website grows and grows. Adding related subjects and subcategorizing information and links. Cross-linking or Cross-Referencing so that related information can be found in more than one place, while at the same time displaying more Connections and more Associations.

Interconnectedness
Semantic Web Info


What being a Human Search Engine Represents

A Human Search Engine is more than just a Website with Hyperlinking, and it's more than just an Information Hub or a Node with Contextual Information and Structured Grouping. A Human Search Engine is also more than just Knowledge Organization, which is a branch of Library and Information Science (LIS) concerned with activities such as document description, indexing and classification performed in libraries, databases, archives, etc.
 
Intelligence Gathering is a method by which a country gathers information using non-governmental employees.

Internet Aggregation refers to a web site or computer software that aggregates a specific type of information from multiple online sources.

Knowledge Extraction is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing.

Information Extraction
Database Indexing
Knowledge Base - KM - File System
Media Curation - Digital Curation
Documentation
Information Filtering System

Master Directory is a file system cataloging structure which contains references to other computer files, and possibly other directories. On many computers, directories are known as folders, or drawers to provide some relevancy to a workbench or the traditional office file cabinet.

Web Directory is a directory on the World Wide Web. A collection of data organized into categories. It specializes in linking to other web sites and categorizing those links.

Website Library - Types of Books

Web indexing refers to various methods for indexing the contents of a website or of the Internet as a whole. Individual websites or intranets may use a back-of-the-book index, while search engines usually use keywords and metadata to provide a more useful vocabulary for Internet or onsite searching. With the increase in the number of periodicals that have articles online, web indexing is also becoming important for periodical websites. Web Index

Semantic Web is an extension of the Web through standards by the World Wide Web Consortium (W3C). The standards promote common data formats and exchange protocols on the Web, most fundamentally the Resource Description Framework (RDF).


Search Engines

Organic Search Engine is a search engine that uses human participation to filter the search results and assist users in clarifying their search request. The goal is to provide users with a limited number of relevant results, as opposed to traditional search engines that often return a large number of results that may or may not be relevant.

Search Engine Technology

Organic Search is a method for entering one or a plurality of search items in a single data string into a search engine. Organic search results are listings on search engine results pages that appear because of their relevance to the search terms, as opposed to their being advertisements. In contrast, non-organic search results may include pay per click advertising.

Hybrid Search Engine is a type of computer search engine that uses different types of data with or without ontologies to produce the algorithmically generated results based on web crawling. Previous types of search engines only use text to generate their results. Hybrid search engines use a combination of both crawler-based results and directory results. More and more search engines these days are moving to a hybrid-based model.

Question and Answers Format

Search Engine (computing) is an information retrieval system designed to help find information stored on a computer system. The search results are usually presented in a list and are commonly called hits. Search engines help to minimize the time required to find information and the amount of information which must be consulted, akin to other techniques for managing information overload. The most public, visible form of a search engine is a Web search engine which searches for information on the World Wide Web.

Indirection is the ability to Reference something using a name, reference, or container instead of the value itself. The most common form of indirection is the act of manipulating a value through its memory address. For example, accessing a variable through the use of a pointer. A stored pointer that exists to provide a reference to an object by double indirection is called an indirection node. In some older computer architectures, indirect words supported a variety of more-or-less complicated addressing modes.

Probabilistic Relevance Model is a formalism of information retrieval useful to derive ranking functions used by search engines and web search engines in order to rank matching documents according to their relevance to a given search query. It makes an estimation of the probability of finding if a document dj is relevant to a query q. This model assumes that this probability of relevance depends on the query and document representations. Furthermore, it assumes that there is a portion of all documents that is preferred by the user as the answer set for query q. Such an ideal answer set is called R and should maximize the overall probability of relevance to that user. The prediction is that documents in this set R are relevant to the query, while documents not present in the set are non-relevant.
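One well-known ranking function that grew out of the probabilistic relevance framework is BM25. The sketch below is a simplified, illustrative scorer with made-up documents, not a full treatment of the model:

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # query: list of tokens; docs: list of token lists
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N        # average document length
    scores = [0.0] * N
    for term in query:
        df = sum(1 for d in docs if term in d)   # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        for i, d in enumerate(docs):
            tf = d.count(term)                   # term frequency in this doc
            denom = tf + k1 * (1 - b + b * len(d) / avgdl)
            scores[i] += idf * tf * (k1 + 1) / denom
    return scores

docs = ["human search engine".split(),
        "search engine optimization tips".split(),
        "cooking recipes for dinner".split()]
print(bm25_scores("human search".split(), docs))  # first doc scores highest
```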

Bayesian Network is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.

Bayesian Inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
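The disease-and-symptom example above can be worked through directly with Bayes' theorem. The probabilities below are invented purely for illustration:

```python
# Invented numbers: P(disease), P(symptom | disease), P(symptom | healthy)
p_disease = 0.01
p_symptom_given_disease = 0.9
p_symptom_given_healthy = 0.05

# Bayes' theorem: P(disease | symptom)
#   = P(symptom | disease) * P(disease) / P(symptom)
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

print(round(p_disease_given_symptom, 3))  # about 0.154, despite the 90% hit rate
```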

Search Aggregator is a type of metasearch engine which gathers results from multiple search engines simultaneously, typically through RSS search results. It combines user specified search feeds (parameterized RSS feeds which return search results) to give the user the same level of control over content as a general aggregator.

Metasearch Engine (or aggregator) is a search tool that uses other search engines' data to produce its own results from the Internet. Metasearch engines take input from a user and simultaneously send out queries to third-party search engines for results. Sufficient data is gathered, formatted by their ranks, and presented to the users.
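In spirit, a metasearch engine merges several engines' ranked lists into one. A toy reciprocal-rank merge, with hard-coded stand-ins for real engine results:

```python
from collections import defaultdict

def merge_rankings(rankings):
    # Reciprocal-rank merge: a URL earns more points the higher each engine ranks it
    scores = defaultdict(float)
    for ranked_urls in rankings:
        for position, url in enumerate(ranked_urls):
            scores[url] += 1.0 / (position + 1)
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["siteA.com", "siteB.com", "siteC.com"]
engine_b = ["siteB.com", "siteA.com", "siteD.com"]
print(merge_rankings([engine_a, engine_b]))  # siteA and siteB rise to the top
```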

Prospective Search is a method of searching on the Internet where the query is given first and the information for the results are then acquired. This differs from traditional, or "retrospective", search such as search engines, where the information for the results is acquired and then queried.

Subject Indexing is the act of describing or classifying a document by index terms or other symbols in order to indicate what the document is about, to summarize its content or to increase its findability. In other words, it is about identifying and describing the subject of documents. Indexes are constructed, separately, on three distinct levels: terms in a document such as a book; objects in a collection such as a library; and documents (such as books and articles) within a field of knowledge.

Search Engine Indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process in the context of search engines designed to find web pages on the Internet is web indexing.
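At the heart of that collect-parse-store pipeline is usually an inverted index, which maps each term to the documents that contain it. A minimal sketch with made-up pages:

```python
from collections import defaultdict

def build_index(pages):
    # Map each word to the set of page ids that contain it
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

def search(index, query):
    # AND-query: return pages that contain every query word
    sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*sets) if sets else set()

pages = {1: "human search engine", 2: "web search tips", 3: "knowledge hive"}
index = build_index(pages)
print(search(index, "search engine"))  # {1}
```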

Indexing

Text Mining also referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities). Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via application of natural language processing (NLP) and analytical methods. A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted.

Social Search is a behavior of retrieving and searching on a social searching engine that mainly searches user-generated content such as news, videos and images related search queries on social media like Facebook, Twitter, Instagram and Flickr. It is an enhanced version of web search that combines traditional algorithms. The idea behind social search is that instead of a machine deciding which pages should be returned for a specific query based upon an impersonal algorithm, results that are based on the human network of the searcher might be more relevant to that specific user's needs.

Interactive Person to Person Search Engine

Gimmeyit (search engine) is a crowd-source-based search engine using social media content to find relevant search results rather than the traditional rank-based search engines that rely on routine cataloging and indexing of website data. The crowd-source approach scans social media sources in real-time to find results based on current social "buzz" rather than proprietary ranking algorithms being run against indexed sites. With a crowd-source approach, no websites are indexed and no storage of website metadata is maintained.

Tagasauris - Public Data

Selection-Based Search is a search engine system in which the user invokes a search query using only the mouse. A selection-based search system allows the user to search the internet for more information about any keyword or phrase contained within a document or webpage in any software application on his desktop computer using the mouse.

Web Searching Tips

Web Portal is most often a specially designed web site that brings information together from diverse sources in a uniform way. Usually, each information source gets its dedicated area on the page for displaying information (a portlet); often, the user can configure which ones to display.

Networks - Social Networks

Router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node.

Interface

Computer - Internet - Web of Life

Window to the World
Open Source

A Human Search Engine also includes...
Archival Science - Archive
Knowledge Management
Library Science
Information Science

Human-Based Computation is a computer science technique in which a machine performs its function by outsourcing certain steps to humans, usually as microwork. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human-computer interaction. In traditional computation, a human employs a computer to solve a problem; a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret. Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem, then collects, interprets, and integrates their solutions.

Reflective Practice
Research - Science
Tracking
Interdiscipline
Thesaurus

Open to the Public

Libre Knowledge is knowledge released in such a way that users are free to read, listen to, watch, or otherwise experience it; to learn from or with it; to copy, adapt and use it for any purpose; and to share the work (unchanged or modified).

Knowledge Commons refers to information, data, and content that is collectively owned and managed by a community of users, particularly over the Internet. What distinguishes a knowledge commons from a commons of shared physical resources is that digital resources are non-subtractible; that is, multiple users can access the same digital resources with no effect on their quantity or quality.

Open Science

Open Knowledge is knowledge that one is free to use, reuse, and redistribute without legal, social or technological restriction. Open knowledge is a set of principles and methodologies related to the production and distribution of knowledge works in an open manner. Knowledge is interpreted broadly to include data, content and general information.

Open Knowledge Initiative is an organization responsible for the specification of software interfaces comprising a Service Oriented Architecture (SOA) based on high level service definitions.

Open Access Publishing refers to online research outputs that are free of all restrictions on access (e.g. access tolls) and free of many restrictions on use (e.g. certain copyright and license restrictions).

Open Data is the idea that some data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control.

Open Content describes a creative work that others can copy or modify.

Open Source Education

Internet

A Human Search Engine is a lot of work. I have been working an average of 20 hours a week since 1998, and over 50 hours a week since 2006. With over a billion websites containing over 450 billion web pages on the World Wide Web, there's a lot of information to be organized. And with almost 2 billion people on the internet, there's a lot of minds to collaborate with.
My Human Search Engine Design Methods are always improving, but I'm definitely not a professional website architect, so there is always more to learn. I'm constantly Multitasking, so I do make mistakes from time to time, especially with proofreading my own writing, which seems almost impossible (Writer's Blindness). This is why writers and authors have proofreaders and copy editors, which is something I cannot afford right now, so please excuse my spelling errors and poor grammar. Besides that, I'm still making progress and I'm always acquiring new knowledge, which makes these projects fascinating and never boring. The Adventures in Learning. You can also look at my website as web indexing.

Web indexing means creating indexes for individual Web sites, intranets, collections of HTML documents, or even collections of Web sites. Web-indexing.org

Indexes are systematically arranged items, such as topics or names, that serve as entry points to go directly to desired information within a larger document or set of documents. Indexes are traditionally alphabetically arranged. But they may also make use of Hierarchical Arrangements, as provided by thesauri, or they may be entirely hierarchical, as in the case of taxonomies. An index might not even be displayed, if it is incorporated into a searchable database.

Indexing is an analytic process of determining which concepts are worth indexing, what entry labels to use, and how to arrange the entries. As such, Web indexing is best done by individuals skilled in the craft of indexing, either through formal training or through self-taught reading and study.

An Index is a list of words or phrases ('headings') and associated pointers ('locators') to where useful material relating to that heading can be found in a document or collection of documents. Examples are an index in the back matter of a book and an index that serves as a library catalog.

A Web index is often a browsable list of entries from which the user makes selections, but it may be non-displayed and searched by the user typing into a search box. A site A-Z index is a kind of Web index that resembles an alphabetical back-of-the-book style index, where the index entries are hyperlinked directly to the appropriate Web page or page section, rather than using page numbers.    
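Generating a site A-Z index like the one just described is mostly sorting entries and grouping them by first letter. A sketch, with made-up entry labels and paths:

```python
from itertools import groupby

def az_index(entries):
    # entries: (label, url) pairs; group alphabetically like a back-of-book index
    entries = sorted(entries, key=lambda e: e[0].lower())
    return {letter.upper(): list(group)
            for letter, group in groupby(entries, key=lambda e: e[0][0].lower())}

entries = [("Archivist", "/archivist.html"),
           ("Aggregator", "/aggregate.html"),
           ("Bayesian Inference", "/bayes.html")]
for letter, items in az_index(entries).items():
    print(letter, [label for label, _ in items])  # A [...], B [...]
```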
 
Interwiki Links is a facility for creating links to the many wikis on the World Wide Web. Users avoid pasting in entire URLs (as they would for regular web pages) and instead use a shorthand similar to links within the same wiki (intrawiki links).

I'm like an aisle in the internet library. Organizing data out of necessity while making it of value to others at the same time. Eventually connecting to other human search engines around the world to expand its reach and capabilities.

I like to describe my website as being kind of like a lateral Blog rather than the usual Linear Blog, because I update multiple pages at once instead of just one. As of 2010, around 120,000 new weblogs were being created worldwide each day, but of the 70 million weblogs that have been created, only around 15.5 million are actually active. Though blogs and User-Generated Content are useful to some extent, I feel that too much time and effort is wasted, especially if the information and knowledge gained from a blog is not organized and categorized in a way that readers can utilize, with access to these archives like they would have with newspapers. This way someone can build knowledge-based evidence and facts to use against corruption and incompetence. This would probably take a Central Location for all the blogs to submit to, so that useful knowledge and information is not lost in a sea of confusion. This is one of the reasons why this website's information and links will continue to be organized and updated so that the website continues to improve.

"Links in a Chain"

"There's a lot you don't know, welcome to web 3.0" This is not just my version of the internet, this is my vision of the internet. And this is not philosophy, it's just the best idea that I have so far until I can find something better to add to it, or replace it, or change it. A Think Tank who's only major influence is Logic.


"When an old man dies, it's like entire library burning down to the ground. But not for me, I'll just back it up on the internet."


Internet Searching Tips 

"Knowing how to ask a question and knowing how to analyze the answers"

If you're on a website and using the Firefox browser, right-click on the page and then click on "Save Page As"; it will save the entire page on your computer so that you can view that page when you are offline, without the need for an internet connection.
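The same offline-saving idea can be scripted. A minimal sketch using only Python's standard library (the URL and file name are placeholders):

```python
import urllib.request

def save_page(url, path):
    # Download the page's HTML so it can be read later without a connection
    with urllib.request.urlopen(url) as response:
        html = response.read()
    with open(path, "wb") as f:
        f.write(html)

save_page("http://example.com", "saved_page.html")
```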

When searching the internet you have to use more than one search engine in order to do a complete search. Using one search engine will narrow your findings and possibly keep you from finding what you're looking for, because most search engines are not perfect and are sometimes unorganized, flawed and manipulated. This is why I'm organizing the Internet: because search engines are flawed and thus cannot be fully depended on for accuracy. Adaptive Search

Example: Using the same exact keywords on 4 different search engines, I found the website that I was looking for at the top, in the number one position, on 2 of the 4 search engines, and I could not find that same website on the other search engines unless I searched several pages deep. So one search engine is flawed or manipulated and the other is not. There is a chance that the webpage you are looking for is not titled correctly, so you may have to use different keywords or phrases in order to find it. But even then there is no guarantee, because search engines also use other factors when calculating the results for particular words or phrases. And what all those other factors are and how they work is not exactly clear.

Search engines are in fact a highly important Social Service, just like a Congressman or President, except not corrupted, of course. If you honestly cannot say exactly how and why you performed a particular action, then how the hell are people supposed to believe you, or understand what they need to do in order to fix your mistake, or at least confirm there was no mistake? Transparency, Truth and knowing the Facts for these particular services are absolutely necessary. People have the right not to be part of a Blind Experiment. These systems need to be Open, Monitored and Audited in order for them to work accurately and efficiently.

Google censors search results, while at the same time they kill small businesses, and not only that, they influence other people to censor information and corrupt the system. Why do corporations get greedy and criminal? And why do they cause others to repeat this madness? Money and Power in the wrong hands are a cancer.

Problems with Google

Life through Google's Eyes: Google's instant autocomplete automatically fills in words and phrases with search predictions and suggestions, sometimes with disturbing results.

The Google Algorithm works OK most of the time, but it is also used to censor websites unfairly. Corruption at its worst.

Penguin
EMD
Panda
Google Bomb
Search Engine Failures
Algorithms
Search Algorithm
Human Search Engine

Internet 
Internet Safety

"If you are indexing information, that should be your focus. If information is judged on irrelevant factors, then you will fail to correctly distribute information, which will make certain information in search results unreliable, illogical and corrupted."

In the meantime, when searching the Internet, going several pages deep on search engines will also help you find information, because the first 10 choices are sometimes irrelevant. I have sometimes found things that I'm looking for 30 pages deep. You will also find different keywords, phrases and characters within the search results that may help increase your odds of finding what you're looking for. Sometimes checking a website's links on their resources page may also help you find websites that are not listed correctly in search engines. Web Searching for Information needs to be a Science.

Human Search Engine Tips

Most search engines like Google have Advanced Searching Tools found on the side or at the bottom of their search pages.

Knowing where to type in certain characters in your search phrases also helps you find what you're looking for.

If you want to limit your searches on Google to only education websites or government websites, then type "site:edu" or "site:gov" after your keyword or search phrase.
For example: Teaching Mathematical Concepts site:edu
To search a specific website, put the site after your word or phrase, e.g. "neutrino site:harvard.edu".

To narrow your searches to file types like PowerPoint, Excel or PDFs, type filetype:ppt (or filetype:xls, filetype:pdf) after the word.

For search ranges, use 2 periods between 2 numbers, like "Wii $200..$300".

You can also use quotes or a + or - within your search phrases. For example, imagine you want to find pages that have references to both President Obama and President Bush on the same page.
You could search this way: +President Obama +President Bush
Or if you want to find pages that have just President Obama and not President Bush, then your search would be:
President Obama -President Bush

If you are looking for sand sharks, search engines will give you results with the words sand and sharks, but if you use quotation marks around "sand sharks", it will narrow your search to that exact phrase.

Using "~" (tilde) before a search term yields results with related terms.    Regular Expression 

Conversions: try typing "50 miles in kilometers" or "100 dollars in Canadian dollars".

Use Google to do math: just enter a calculation as you would into your computer's calculator (i.e. * corresponds to multiply, / to divide, etc.).

To find the time in a certain place, type: Time: Danbury, CT
Just got a phone call and want to see where the call is from? Type in the 3-digit area code.
Type any address into Google's main search bar for maps and directions.
While on Google Maps, select the day of the week and the time of day for the traffic forecast.
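These operators can also be composed programmatically. The operator syntax below comes from the tips above; the helper function itself, and its parameter names, are my own invention for illustration:

```python
def build_query(terms, site=None, filetype=None, exclude=None, exact=None):
    # Compose a search string using the operators described above
    parts = list(terms)
    if exact:
        parts.append(f'"{exact}"')                    # quoted exact phrase
    if exclude:
        parts.extend(f"-{word}" for word in exclude)  # minus excludes a term
    if site:
        parts.append(f"site:{site}")                  # restrict to a site or domain
    if filetype:
        parts.append(f"filetype:{filetype}")          # restrict to a file type
    return " ".join(parts)

print(build_query(["teaching", "mathematical", "concepts"], site="edu"))
# -> teaching mathematical concepts site:edu
print(build_query(["sharks"], exact="sand sharks", exclude=["hammerhead"]))
```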


What are people Searching for, and what Keywords are they using?

Search Query Trends
Google Insights Search Trends
Google Trends  Google
Yahoo Alexa Web Trends

You can learn even more great search tips by visiting this website Search Engine Watch.

Learning Boolean Logic can also help with improving your Internet searching skills.
Boolean Operators (youtube)
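Boolean operators map directly onto set operations over result sets, which is a handy way to reason about how they narrow or widen a search. The result sets below are made up:

```python
results_sand = {"siteA", "siteB", "siteC"}    # results for "sand"
results_sharks = {"siteB", "siteC", "siteD"}  # results for "sharks"

print(results_sand & results_sharks)  # AND: pages matching both terms
print(results_sand | results_sharks)  # OR: pages matching either term
print(results_sand - results_sharks)  # NOT: "sand" pages without "sharks"
```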


Internet in 60 Seconds

More Amazing Numbers and Facts





The Thinker Man