Introduction: Twitter, Facebook, Instagram. What do these and many other social media networks have in common? They all have functionality to show your location, share personal tidbits, post pictures and generally create a public profile. These features can be a great way to stay in contact with friends and family despite long distances, and they can also help people get hired or showcase their talents and professional abilities. Yet these same networks can end up causing security catastrophes. This essay discusses various technologies and methods used to socially engineer social networks, drawing on resources from various experts, firsthand accounts and social experiments. Following these discussions, a conclusion will summarize the findings and provide contemplation topics for the reader. Social engineering may sound like a conspiracy theory, but it is a prevalent method currently in use against many organizations. Big organizations are routinely targeted: in 2013 the New York Times had its website defaced in an incident attributed to a social-engineering attack, and countless other organizations have fallen victim to this hacking technique. Social engineering is usually done through email or even face-to-face contact. Those vectors can be easier for companies to mitigate, because policies and procedures can be enforced to protect the organization and its employees. A far harder tactic to prevent is when attackers turn to social media for social engineering.
Nearly everyone has a Facebook, LinkedIn or Twitter account. Many organizations have "Employees [that] are on Facebook, LinkedIn, Twitter and Quora, and they are adding personal information to the Web every single day". This is a big concern for companies, because many employees use these accounts to explicitly or implicitly state where they are, where they will be or what they are doing. Mounting a social-engineering attack may be simpler than most people think.
An attacker might begin by targeting the data and records of a specific organization. They can start by going to LinkedIn and searching for that organization, which may yield many contacts. From there they may acquire "Job titles, employment histories, education history, affiliated organizations, business contacts and in some cases their [employee] pictures". Based on this information the attacker can now infer employees' hobbies, family relations and Facebook accounts. Once this information is acquired, a slightly technologically savvy attacker could spoof a text message from a business associate to the targeted victim. That text message could be the beginning of the end for an organization's security measures. This is just one of countless ways an attacker can use a social network for nefarious purposes.
The scary stuff: It is fun to update friends and family on one's whereabouts, and there can even be safety benefits, but widely available software might give pause to anyone streaming constant updates. Social networks love giving members the opportunity to tell others what is currently happening in their lives. Social engineers also love this capability. Geolocation profiles are essentially dossiers containing as much information as possible about a target's daily routine. These profiles are created by combing through Facebook updates, Twitter updates and any other social media the target subscribes to, in order to harvest location updates. This information is used to find the target's physical location, and a "routine" is then written out showing where the target is during specified parts of the day.
A potential target might leave their house on the way to work and grab a coffee from Starbucks every morning. At Starbucks they might take a picture of their morning coffee, add a clever comment and post it to Facebook. An attacker monitoring their Facebook account could map out which days of the week the individual works, and when, based on these status updates. They may also see which Starbucks location the individual visits every work day. This is how an attacker begins to build a geolocation profile.
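The routine-building step described above can be sketched in a few lines of code. This is a toy illustration, not any real tool: the posts, places and timestamps are entirely invented, and the "profile" is simply the set of recurring (weekday, place) pairs.

```python
# Toy sketch of building a geolocation "routine" from timestamped,
# geotagged status updates. All data below is invented for illustration.
from collections import defaultdict
from datetime import datetime

posts = [
    ("2015-03-02 07:45", "Starbucks, 5th Ave"),   # a Monday
    ("2015-03-04 07:50", "Starbucks, 5th Ave"),   # a Wednesday
    ("2015-03-09 07:42", "Starbucks, 5th Ave"),   # the next Monday
    ("2015-03-07 13:10", "City Park"),            # a Saturday
]

routine = defaultdict(list)
for stamp, place in posts:
    dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
    # Group sightings by (weekday, place); repeats reveal the pattern.
    routine[(dt.strftime("%A"), place)].append(dt.strftime("%H:%M"))

for (day, place), times in sorted(routine.items()):
    if len(times) > 1:  # seen more than once: part of the weekly routine
        print(f"{day} around {times[0]}: {place}")
# Monday around 07:45: Starbucks, 5th Ave
```

Even this crude grouping shows how quickly a handful of public posts collapses into a predictable schedule.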
Cree.py is a well-designed, easy-to-use program for creating geolocation profiles. The software comes with a helpful tutorial and installs on just about any Linux distribution and even on Windows. One simply types in the user name of a social media account the target subscribes to, and the software begins looking for location updates. From there one can export the results to Google Earth, and then the magic happens: an entire map is generated showing where the target was, with the time and date of each status update. This software is freely available and easy to install for nearly anyone who knows a little about technology. How does one protect oneself from such social-media engineering techniques?
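The export step in that workflow can be approximated by hand. The sketch below is not Cree.py's actual code; the username, coordinates and timestamps are invented. It simply shows how a list of harvested geotags becomes a KML file that Google Earth can plot as placemarks.

```python
# Hand-rolled sketch of the Cree.py-style export step (hypothetical data):
# turn a list of (timestamp, lat, lon) geotags into a minimal KML file.

updates = [  # invented sample geotags for a hypothetical "target_user"
    ("2015-06-01T07:45:00Z", 40.7411, -73.9897),
    ("2015-06-01T12:03:00Z", 40.7527, -73.9772),
]

def to_kml(username, points):
    """Build a minimal KML document with one Placemark per geotag.
    Note: KML coordinates are ordered longitude,latitude."""
    placemarks = "\n".join(
        f"  <Placemark><name>{username} @ {ts}</name>"
        f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
        for ts, lat, lon in points
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            f"{placemarks}\n</Document></kml>")

with open("target_profile.kml", "w") as f:
    f.write(to_kml("target_user", updates))
```

Opened in Google Earth, each placemark pins the target to a place and time, which is exactly the map Cree.py automates.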
Robin Sage: The Robin Sage experiment was a social-engineering experiment that used social media as its primary vector. It was conducted over 28 days, during which a profile was created for a fictitious female "security analyst" who happened to have an attractive profile picture. Thomas Ryan conducted the experiment to draw attention and concern to this type of attack. Over the 28 days Ryan used the profile to gather "hundreds of connections through various social networking sites". The most concerning finding was that Ryan was able to obtain "information revealed to Robin Sage [that] violated OPSEC procedures". The fictitious profile was even invited to "speak at a variety of security conferences". The conclusion of this case study is that one needs to beware of seemingly friendly, unknown business connections.
Psychology: Social engineers are essentially applying basic, well-known techniques from human psychology. "The trigger most often used by an attacker is called 'the strong affect.' This trigger uses a heightened sense of emotional state, such as fear, panic, excitement, or grief in order to get the victim to take an action". This technique is often combined with breaking news and malicious links on social media. An attacker waits until a news story becomes mainstream, such as a celebrity dying or a plane crash. They then exploit a platform like Twitter, tweeting with the event's trending hashtag and including a link captioned with something promising like "Get the full story here." Any victim who clicks the link might get malware installed on their device. The best way to thwart social engineering through social media is usually education.
Conclusion: Perhaps, after reading about all the dangers of using the internet, one might want to stay as far away from it as possible. Although that would avoid social-engineering attacks through social networks, it is not very achievable in the information age in which society operates. There was a time when society was certain the earth was flat, and even punished people who disagreed. Thankfully, education tends to enlighten the minds of many.
As with most things in life, education is the key to success. Whether that is success in marriage or success in avoiding scams and malicious schemes, the more an individual knows about the given subject, the greater their chances of survival. It is important never to stop learning about new technologies and their benefits and downfalls. Doing so helps one avoid most of the hardships and heartaches that can come from ignorance.
The best way to prevent social engineering through social networks is to stay educated on the topic. It is best to remember that, generally, if something seems too good to be true, it probably is. Also, do not put information on any social media network that would compromise the security of your home, family or workplace. Finally, one needs to be smart with the exchange of information; that is, one should be constantly vigilant about sharing personal information. Following these suggestions will likely increase one's personal security and help deter social-engineering attacks through social networking.
Since societies first formed, there have always been individuals who go against commonly accepted societal rules. In the modern age we face robbery, theft and destruction of property, and these problems are now spreading to the digital world. Malicious hackers, sometimes rumored to be funded by government agencies and sometimes working on their own, have begun to develop software that unifies Artificial Intelligence (AI) with malicious hacking techniques.
This paper explores some of the most common and uncommon AI hacking techniques. The first topic discussed will be AI hacking attacks. After that, techniques that use AI to fight hacking attacks are covered. The penultimate topic is "bleeding edge" technology that involves AI and presents new possible hacking concerns. Finally, a brief summary of what was discussed concludes the essay.
Known AI Hacking Techniques: Malware is a growing problem for anyone who accesses the World Wide Web (WWW). It has been estimated that "web based attacks increased 36% with over 4,500 new attacks each day" in 2012, and the same report states, "In 2011, Symantec Internet Security reported that ∼ 403 million new variants of malware were created, a 41% increase from 2010." Clearly, calling malware the new black plague would be an understatement. The majority of headline-grabbing attacks are carried out by highly skilled hackers: "State sponsored highly skilled hackers are developing customized malwares to disrupt industries and for military espionage." The first generation of malware had a static program structure. With the emergence of second-generation malware, researchers are finding that program structure changes in a variety of ways. Second-generation malwares are often categorized as encrypted, oligomorphic, polymorphic and metamorphic malwares.
Encrypted malware works by using an encryptor and a decryptor. The main body of the code is decrypted when the program runs, and each time the malware runs the main body is re-encrypted in order to hide its signature from anti-virus software. Eventually, however, the anti-virus software is able to detect the malware, because the decryptor does not change between versions. The anti-virus software recognizes the code pattern by looking for a code signature. Signature detection works by extracting unique bytes from the malware code until enough bytes exist to create a unique signature; the scanner then checks the computer's programs for these bytes and alerts the user if they are found. This is an effective way of detecting known malware, though the signature must match exactly for the scanner to detect it. Naturally, malicious hackers developed ways to change the decryptor so that the code is harder to detect.
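Signature detection as described above is, at its core, a byte-substring search. The sketch below is a deliberately minimal illustration: the "signature" bytes and the sample buffers are made up, and a real scanner would, of course, be far more sophisticated.

```python
# Minimal sketch of signature-based detection: the scanner looks for an
# exact, known byte sequence. Signature and samples are invented.

SIGNATURE = bytes.fromhex("deadbeef4f4c4421")  # hypothetical extracted bytes

def contains_signature(program_bytes: bytes, signature: bytes = SIGNATURE) -> bool:
    """Return True if the exact signature byte sequence appears anywhere."""
    return signature in program_bytes

benign   = b"\x90\x90" + b"hello world"
infected = b"\x90\x90" + SIGNATURE + b"payload"

print(contains_signature(benign))    # False
print(contains_signature(infected))  # True
```

The exactness of the match is the point: flip one byte of the decryptor in each generation and `contains_signature` returns False, which is precisely the weakness the oligomorphic and polymorphic families discussed next exploit.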
Oligomorphic means that something can change in a few ways, and oligomorphic malware does exactly that: hackers devised ways to generate multiple decryptors. "At most this malware can generate few hundred different decryptors, e.g. Win95/Memorial had the ability to build 96 different decryptor patterns." Inevitably, virus scanners detect the malicious software, and oligomorphic malware led to the next evolution: polymorphic malware.
Polymorphic, meaning "many forms," is the newest known wave of malware. "In Polymorphic malwares, millions of decryptors can be generated by changing instructions in the next variant of the malware to avoid signature based detection." The technique involves a "mutation engine that creates a new decryptor which is joined with the encrypted malware body to construct a new variant of malware." It also relies on code obfuscation, which is simply obscuring the code through various means, including "dead-code insertion, register reassignment, subroutine reordering, instruction substitution, code transposition/integration etc." Anti-virus programs counter this by emulating the code until it decrypts itself and then applying signature recognition to eventually detect the malware.
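Dead-code insertion, the first obfuscation listed above, can be demonstrated without any real malware. The sketch below uses a tiny invented three-opcode "assembly" and shows the essential effect: two functionally identical programs produce different byte signatures.

```python
# Illustrative sketch of why dead-code insertion defeats byte-for-byte
# signatures: the mutated program behaves identically but hashes differently.
# The "instruction set" here is invented for the example.
import hashlib

original = ["LOAD A", "ADD B", "STORE C"]
mutated  = ["LOAD A", "NOP", "ADD B", "NOP", "STORE C"]  # dead code inserted

def run(program, a=2, b=3):
    """Tiny interpreter for the pretend opcodes above; returns C."""
    acc = c = 0
    for op in program:
        if op == "LOAD A":
            acc = a
        elif op == "ADD B":
            acc += b
        elif op == "STORE C":
            c = acc
        elif op == "NOP":
            pass  # dead code: no effect on behaviour
    return c

def signature(program):
    """Stand-in for a byte signature: hash of the program text."""
    return hashlib.sha256("\n".join(program).encode()).hexdigest()

print(run(original) == run(mutated))              # True  (same behaviour)
print(signature(original) == signature(mutated))  # False (different bytes)
```

This is why the emulation approach mentioned above matters: behaviour survives mutation even when the byte pattern does not.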
Metamorphic malware exhibits the ability to change the actual body of the program; all the other techniques mentioned change only the encryptor and/or decryptor. Metamorphic malware is virtually undetectable because the signature itself can mutate. Only a few viruses have been considered truly metamorphic. The first was detected "in 1998 called … Win95/Regswap. In 2000, Win32/Ghost virus was created with 3628800 different variants. One of the strongest metamorphic malware W32/NGVCK was created in 2001 with the help of Next Generation Virus Creation Kit (NGVCK)." These are clearly the beginning stages of malicious hackers utilizing AI in their programs. AI is also being developed on the other side of the spectrum, namely by malware-detection developers.
Using AI for detection: Researchers in academia and industry have been working together to develop new methods of detecting malware. Current research involving machine learning claims to exceed 90% detection accuracy through classification methods using only 20 features. Such methods could even improve the ability to detect future malware before it is widely known. "Popular machine learning techniques among the researchers for the detection of 2nd generation malwares are Naive Bayes, Decision Tree, Data Mining, Neural Networks and Hidden Markov Models."
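Naive Bayes, the first technique in that list, is simple enough to sketch from scratch. Everything below is invented for illustration: the feature names stand in for the kinds of static features (API calls, packer flags, etc.) such classifiers use, and the four training samples are far too few for a real detector.

```python
# A minimal Bernoulli Naive Bayes sketch for feature-based malware
# classification. Training data and feature names are invented.
from collections import defaultdict
import math

train = [  # (features present in the binary, label)
    ({"writes_registry", "packed"}, "malware"),
    ({"packed", "net_beacon"}, "malware"),
    ({"writes_registry"}, "benign"),
    ({"gui", "signed"}, "benign"),
]
features = {f for feats, _ in train for f in feats}

def fit(samples):
    counts = defaultdict(lambda: defaultdict(int))
    labels = defaultdict(int)
    for feats, label in samples:
        labels[label] += 1
        for f in feats:
            counts[label][f] += 1
    return counts, labels

def predict(feats, counts, labels):
    best, best_lp = None, -math.inf
    total = sum(labels.values())
    for label, n in labels.items():
        lp = math.log(n / total)            # class prior
        for f in features:                  # Laplace-smoothed likelihoods
            p = (counts[label][f] + 1) / (n + 2)
            lp += math.log(p if f in feats else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, labels = fit(train)
print(predict({"packed", "net_beacon"}, counts, labels))  # malware
```

With 20 well-chosen features instead of 5 and thousands of samples instead of 4, the same arithmetic is what the surveyed research scales up.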
Another method of detecting polymorphic malware is to compare the information sent with the information expected. For example: "A server receives from a client device a hash value and metadata associated with an electronic file. The server determines that the received metadata relates to corresponding metadata stored at a database, the corresponding stored metadata being associated with a further hash value that differs from the received hash value. A determination is made that each of the received hash values has been reported by fewer than a predetermined number of clients and, as a result, it is determined that the electronic file is likely to be polymorphic malware" [3:1].
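The server-side logic in that patent excerpt can be sketched as a bookkeeping exercise. The names, threshold and sample reports below are invented; the point is the heuristic itself: one file identity (shared metadata) turning up under many distinct hashes, each reported by only a few clients, looks polymorphic.

```python
# Sketch of the reported-hash heuristic from the patent excerpt above
# (all identifiers and data invented). A file whose metadata matches
# known records but whose hashes are each rare is flagged.
from collections import defaultdict

REPORT_THRESHOLD = 3  # hashes seen by fewer clients than this are "rare"

# reports[metadata_key][file_hash] -> set of client ids that reported it
reports = defaultdict(lambda: defaultdict(set))

def record(metadata_key, file_hash, client_id):
    reports[metadata_key][file_hash].add(client_id)

def looks_polymorphic(metadata_key):
    hashes = reports[metadata_key]
    rare = [h for h, clients in hashes.items()
            if len(clients) < REPORT_THRESHOLD]
    # Many distinct hashes for one file identity, every one of them rare.
    return len(hashes) > 1 and len(rare) == len(hashes)

# Three clients each report a *different* hash for the "same" file:
record("installer.exe|v1.2", "aaa111", "client-1")
record("installer.exe|v1.2", "bbb222", "client-2")
record("installer.exe|v1.2", "ccc333", "client-3")
print(looks_polymorphic("installer.exe|v1.2"))  # True
```

A legitimate file shows the opposite pattern: thousands of clients all reporting the identical hash, which this check leaves unflagged.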
In this example, AI is utilized by a program that understands, to some degree, what the requested information should look like and roughly how much of it should be received. When using AI for intrusion detection, the focus generally falls on three areas: "data, classification and modeling techniques and system infrastructure."
These techniques provide a foundation for AI software to evaluate traffic and find malware. Among the techniques used are linear modeling methods, non-linear modeling methods and probabilistic models. An example of a linear modeling method is principal component analysis (PCA). This method transforms a set of data into uncorrelated latent factors, or hidden variables derived from the original data; the principal components capture as much variation in the data as possible, and anomalies appear as outliers, which raise an alert that malware may be present. Non-linear models include techniques such as clustering, K-nearest neighbor (KNN), neural networks and fuzzy logic, among others. Perhaps the best-known probabilistic model is the Bayesian network. The field of AI has many techniques to contribute to malware detection; the general consensus is that anti-malware software is still trying to catch up with malware, so hopefully AI researchers will help make significant breakthroughs.
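The PCA outlier idea can be made concrete with synthetic data. In this hedged sketch, "normal traffic" is invented as feature vectors lying near one latent direction; points whose reconstruction error from the top principal component is far above the normal range are flagged as anomalies.

```python
# PCA-based anomaly detection sketch on synthetic "traffic" features:
# project onto the top principal component and flag points with
# outlying reconstruction error. All data is generated, not real.
import numpy as np

rng = np.random.default_rng(0)
# Normal points vary along one latent direction, plus small noise.
normal = rng.normal(0, 1, (200, 1)) @ np.array([[1.0, 0.5, 0.2]])
normal += rng.normal(0, 0.05, normal.shape)
anomaly = np.array([[0.0, 5.0, -4.0]])        # far off the normal subspace
data = np.vstack([normal, anomaly])

mean = data.mean(axis=0)
centered = data - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:1]                            # keep top principal component

projected = centered @ components.T @ components
errors = np.linalg.norm(centered - projected, axis=1)

# Threshold from the normal points: mean + 3 standard deviations.
threshold = errors[:-1].mean() + 3 * errors[:-1].std()
print(errors[-1] > threshold)                  # the injected point stands out
```

The same recipe scales to real intrusion-detection features: fit the components on baseline traffic, then alert on records the retained components cannot reconstruct.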
Malware of the future and conclusion: Future malware may come in new forms. Malware is essentially playing a game of cat and mouse: it needs to hide from detection software long enough to determine whether it is on the type of system it is targeting, because once malware is discovered it becomes obsolete; software updates are sent out to users everywhere and systems are patched. To extend the analogy, malware also does not want to expose itself if it has landed in a research lab or a honeypot that probes for new malware, so it needs to be able to tell whether it is on a real user's system or on whatever else it may be targeting. Some experts believe future malware will take the form of useful software. Malware may try "to avoid detection, it makes sense to hide its true intentions behind genuinely useful properties … 'In some cases, it may just be easier for the malware to do useful stuff on our computers – actually cleaning up our hard disks, say – before it later attacks, in order to seem genuine.'"
Other experts believe the future of malware will reside in using social engineering as the prominent attack vector. "'The lowest hanging fruit is still humans,' said Ken Westin, a security researcher for Tripwire. 'As long as attacks against humans still work consistently attackers will use them on their own, or as part of sophisticated, integrated campaigns.'" Perhaps the scariest attack vector involving AI and malware comes from recent advances in brain-computer interfaces (BCI). BCI technologies may themselves be vulnerable, exposing an individual's brain to hacking, manipulation and control by third parties: "If the brain can control computer systems and computer systems are able to detect and distinguish brain patterns, then this ultimately means that the human brain can potentially be controlled by computer software."
References:
[1] A. Sharma and S. K. Sahay, "Evolution and Detection of Polymorphic and Metamorphic Malwares: A Survey," arXiv preprint, online: http://arxiv.org/ftp/arxiv/papers/1406/1406.7061.pdf
[2] I. You and K. Yim, "Malware Obfuscation Techniques: A Brief Survey," IEEE, online: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5633410
[3] T. Harmonen, "Identifying Polymorphic Malware," U.S. Patent 8,683,216, Mar. 25, 2014.
[4] A. Rehman and T. Saba, "Evaluation of artificial intelligent techniques to secure information in enterprises," The Artificial Intelligence Review, 42(4), pp. 1029-1044, 2014, doi: http://dx.doi.org/10.1007/s10462-012-9372-9
[5] A. Martin, "Future malware might offer real functions to avoid detection," Oct. 9, 2014, online: http://www.welivesecurity.com/2014/10/09/future-malware-might-offer-real-functions-avoid-detection/
[6] T. Brandley, "What data breaches teach us about the future of malware: Your own data could dupe you," Jun. 9, 2014, online: http://www.pcworld.com/article/2360762/what-ebay-taught-us-about-malware-your-own-data-can-be-used-to-dupe-you.html
[7] M. Xynou, "Hacking without borders: The future of artificial intelligence and surveillance," Mar. 15, 2013, online: http://cis-india.org/internet-governance/blog/hacking-without-borders-the-future-of-artificial-intelligence-and-surveillance
Open source software is similar to peer-reviewed articles: despite its ability to catch problems, it can still fall victim to the frailties of men. This essay discusses the recent eavesdropping scandal involving the open source web browser Chromium. It does so by first explaining Google's involvement with Chromium, followed by a brief discussion of open source software. The essay draws on many primary accounts, using ticket responses posted on Google's Chromium code discussion boards and a couple of related articles to create discussion topics. It poses a general question to the reader as a point of reflection: does open source software always result in excellent programs?
The eavesdropping bug recently discovered in the Chromium web browser is a concern for anyone using it. The crux of the problem is that Google released a binary blob that Chromium, because of its default settings, downloads and installs. A binary blob is a piece of closed-source code; it cannot be inspected by anyone and is usually proprietary code not intended to be viewed outside the sphere in which it was created. These default settings in Chromium allow the computer to listen to anyone within "earshot" of the computer's microphone. "The key here is that Chromium is not a Google product (we do not directly distribute it, or make any guarantees with respect to compliance with various open source policies). Our primary focus is getting code ready for Google Chrome". This is not inherently a Google problem, but since it affects the open source community and the binary blob is a Google product, most people became outraged.
Google was ultimately responsible for creating a bug in Chromium. The other issue is that Google Chrome automatically installs the "hotword" module, which essentially listens constantly for the hotword "OK, Google." Google says that "While we do download the hotword module on startup, we do not activate it unless you opt in to hotwording". Google claims this module is intended only for Chrome: "We call extensions that are built into, or automatically downloaded by, Chrome "component extensions" and we do not show them in the extension list by design". This essentially says that Google does not feel the need to tell people that this module is active in their copy of Chrome. However, Google did decide to stop making the hotword module a default piece of Chromium: "Note: Chromium will no longer download/install the hotword Shared Module, and will automatically remove the hotword Shared Module on startup if it was previously installed".
The open source community is valuable because it allows many eyes to look over a program and find problems. "The 2004 report of the California Performance Review, a report from the state of California, urges that 'the state should more extensively consider use of open source software'". Although open source programs are still susceptible to bugs and security concerns, these are usually discovered and remedied much faster than in proprietary software.
Open source software is acclaimed by many as the best way of testing code, and it often drives people to work harder than they would at a normal place of business. These volunteers are often "Motivated by the personal benefit of using an improved software product and by social values such as altruism, reputation and ideology". The only major concern generally associated with open source software is usability. According to one study, "Most developers have a very limited understanding of usability, and there is a lack of resources and evaluation methods fitting into the OSS paradigm". Despite these minor concerns, open source projects are often of a very high caliber: "Software developers have produced systems with a functionality that is competitive with similar proprietary software developed by commercial software organizations". This is remarkable considering the work is done essentially without pay or public recognition. Meanwhile, alongside the open source movement, companies are still producing great software capable of many exciting new functionalities.
Many devices available to the public include capabilities similar to the aforementioned Chrome hotword module. Recently, Samsung produced a smart TV capable of voice command. People were uneasy when they discovered the fine print in the user manual stating, "'Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party'". Motorola's Moto X phone can also actively listen for a hotword, working very similarly to Chrome's hotword module.
Consumers today need to be aware, and beware, of the functionality of their electronic devices. Increased ease of use is always a goal of technology companies such as Google, but concerns and questions arise when these companies tread on new domains. Privacy and security are always thorny topics for new technology.
https://code.google.com/p/chromium/issues/detail?id=500922#c6 This ticket post describes Google's stance on the hotword module. Google claims that the Chrome browser is its own product and thus needs no new permissions to install the hotword module. Google also claims that Debian has to take care of its own bundles of Chromium, and that what Debian does with a Chromium bundle is not Google's concern. Thus Google declined to fix the Chromium problem concerning the hotword module, specifically the concern that the module is a "binary blob": essentially a black box of code inside an open source product. Google explains that this bit of black-box code is the problem of third-party distributors such as Debian.
http://www.theguardian.com/technology/2015/jun/23/google-eavesdropping-tool-installed-computers-without-permission This article describes how the Google Chrome web browser does not automatically opt users in, while the open source Chromium web browser did. Google claims it is not at fault that an open source Linux distribution decided to download the plugin with the Chromium browser, and that it is not Google's responsibility to test how its hands-free search feature works with programs that are not its own. The open source community is upset because this feature does not expose its source code yet gets included in the Chromium browser, which is open source. The article primarily covers the debate over Google's attitude toward the Chromium browser.
http://www.dwheeler.com/oss_fs_why.html This article discusses the use of open source software. According to this review, most software produced by volunteers was more secure and less prone to mistakes. The essay attempts to persuade the reader to consider using this type of software for more purposes and gives compelling evidence for state- and federally-run institutions to use it. It is very lengthy and many of its sources are outdated, but the sources used for this essay were checked to be working links and accurate sources.
http://sloanreview.mit.edu/article/what-makes-a-virtual-organization-work-lessons-from-the-opensource-world/ The Massachusetts Institute of Technology (MIT) produced this article, which tries to understand the open source programming paradigm. It is a business article that relates the paradigm to the business world and presents ideas and research that could be useful to business owners. Businesses are still working on a 20th-century platform, whereas technology is evolving toward a less commercially driven ideology. The article attempts to understand why this is and how businesses need to adapt in order to keep hiring great employees.
http://www.matsc.ktu.lt/index.php/ITC/article/view/11776 This essay is research from Kaunas University of Technology. It is essentially a survey and study of the open source paradigm: "Open Source Software (OSS) development has gained significant importance in the production of software products." Accordingly, the essay studies the effectiveness of OSS and draws the conclusion that OSS is good and less error-prone than many commercial products. The main problem is that it often is not as user-friendly as most commercial software.
http://rt.com/uk/230699-samsung-tv-listens-privacy/ This article discusses various concerns about eavesdropping in general on new electronic devices, including the rising popularity of voice-activated features on cell phones and smart TVs. It is not a primary source; it simply surveys ideas concerning the newest eavesdropping features. It covers Samsung's smart TVs and how they can be activated with key words; likewise, Samsung phones can interact with a user through voice-recognition key words.
https://en.wikipedia.org/wiki/Binary_blob This simply defines "binary blob." Because it is Wikipedia, it was not used as an in-text citation; it was used to give the reader a general understanding of the phrase. It discusses what a binary blob is and how the term is commonly used, as well as what open source software is and why binary blobs are considered bad practice in open source software.
http://fullstack.info/not-ok-google/ This is an eyewitness account by a blogger recording his reactions to discovering the Chromium eavesdropping bug. He reports that the LED on his computer's microphone kept turning on and off. He begins by checking what resources might be using the microphone; after some confusion, a little online research reveals that the Chromium browser is responsible. This is a reaction, and an open question, presented by an eyewitness of the eavesdropping bug. It is cited in a post from The Guardian and is perhaps one of the first recorded reports of the event.
https://code.google.com/p/chromium/issues/detail?id=500922#c44 This is a ticket response concerning the Chromium web browser automatically downloading the hotword module. Google reports that the newest version of Chromium does not ship with the hotword binary blob; one must go to the Chrome Web Store and actively choose to download it. Google reports that it does care about the open source community and agrees the hotword module should be excluded from default Chromium builds. It also reports that the module runs in a sandbox, which is important because it adds security should one decide to use the feature. Google explains that even with the module installed in Chrome, one needs to activate the extension and then enable it, and even once enabled it does not process anything spoken until the key phrase "OK, Google" is heard. All processing before the key phrase is done on one's own computer, so no statistics are sent back to Google.
P = NP: a question that has tantalized mathematicians, complexity theorists and computer scientists for decades. If a problem has a solution that is easily verified by a computer, can that same problem also be solved quickly by a computer? The study of circuit complexity is proving to be the best ace in the hole for many computer scientists; circuit complexity theory may become the instrument used to solve P = NP, the most significant question posed by the Clay Mathematics Institute, along with many other hard problems.
The purpose of this research is to explain the basics of circuit complexity so that a lay person will understand why it is used, survey a brief history of the field, explain some of its more prominent applications and describe its current standing with respect to the P = NP problem. This will be accomplished by first giving a brief introduction to the methodology, identifying and explaining key terms and phrases; then giving a brief history of the study and how it could be used to attack P = NP; and finally describing forays into various abstract applications. The methodology is simple: peer-reviewed articles will be used to explain key terms, and textbooks on the subject will be used in explaining the possible P = NP proof. Multiple sources, including secondary sources such as related articles in the magazine Popular Mechanics, will be used to survey current and future applications.
The English mathematician George Boole may never have known the fingerprint he would leave on the future. Do we have giants in library and information science ? George Boole was a self-taught mathematician  who was indeed such an intellectual giant. By the age of twenty-four he had already submitted papers to various mathematical journals . The Royal Society awarded him a medal in 1844, at the age of twenty-nine. Boole was a strong supporter of unifying the fields of formal logic and mathematics, on which he wrote his paper Mathematical Analysis of Logic . He believed that the study of logic was more closely associated with mathematics than with metaphysics and philosophy.
In 1938, while Claude E. Shannon was an aspiring research assistant at MIT (Massachusetts Institute of Technology), he began studying the similarities between Boolean algebra and telephone switching circuits. Telephone switching was a process done manually by an operator, who would make a physical connection between two parties so that they could communicate with each other. Boolean algebra has a couple of main differences compared to conventional algebra. Boolean algebra has only two symbols in its language: 1 and 0. The other main difference lies in its operators: AND, OR and NOT. The AND operator expresses the idea that both values must be true in order to result in a true statement. The OR operator expresses the idea that only one of the values needs to be true in order to make the statement true. The final operator, NOT, is represented with a bar over the representative value, indicating the opposite of that value. Circuits built on the principles of Boolean algebra and telephone switching became known as Boolean circuits . Boolean circuits seemingly expand across the infinite possibilities of the reaches of mathematical models. Specifically, a Boolean circuit is an aggregation of gates, inputs and outputs. Boolean circuits have the unique trait of being simple yet continuously complex. An entire expanse of study has evolved around them, insomuch that circuit complexity has become a field of study with volumes of books, such that an entire library section could be devoted to it. This study will remain in the breadth of this subject, and allow the reader to devote their own time to researching the depth of its possibilities.
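The three operators just described can be made concrete with a short, hypothetical Python sketch (the function names are my own, not from the sources) that prints their truth tables:

```python
# Truth tables for the three basic Boolean operators.
# As in Boolean algebra, 1 represents TRUE and 0 represents FALSE.

def AND(a, b):
    return a & b  # true only when both inputs are true

def OR(a, b):
    return a | b  # true when at least one input is true

def NOT(a):
    return 1 - a  # the opposite of the input

# Print one row per input combination: a, b, a AND b, a OR b, NOT a.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NOT(a))
```

Every Boolean circuit, however large, is built from compositions of these three operations.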
Circuit complexity
The genesis of circuit complexity is based on the size of a circuit, the depth of a circuit, and circuit families. The size of a circuit is simply the number of gates in the circuit. The depth is the longest wire from an input to the output. A family arises due to the problem of a circuit having a fixed number of inputs: any particular circuit can handle only inputs of some fixed length, whereas a language may contain strings of different lengths. So instead of using a single circuit to test language membership, we use an entire [family] of circuits, one for each input length, to perform this task . The heart of circuit complexity lies in classifying problems into complexity classes, of which there is a plethora.
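As an illustrative sketch (the dictionary representation is my own, not taken from the sources), a Boolean circuit can be stored as a directed acyclic graph, which makes the definitions of size and depth directly computable:

```python
# A tiny Boolean circuit stored as a DAG.
# Inputs are the strings "x0" and "x1"; each gate lists its
# operation and the wires feeding into it.
circuit = {
    "g0": ("AND", ["x0", "x1"]),
    "g1": ("NOT", ["g0"]),
    "g2": ("OR",  ["g1", "x0"]),  # g2 is the output gate
}

def size(circuit):
    # The size of a circuit is simply its number of gates.
    return len(circuit)

def depth(circuit, node):
    # The depth is the longest path from any input to this node.
    if node not in circuit:        # inputs sit at depth 0
        return 0
    _, fan_in = circuit[node]
    return 1 + max(depth(circuit, wire) for wire in fan_in)

print(size(circuit))          # 3 gates
print(depth(circuit, "g2"))   # 3: the path x0 -> g0 -> g1 -> g2
```

A circuit family would be a sequence of such dictionaries, one per input length n.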
The main complexity classes concerned with this study are all time complexity classes. The primary ones discussed are polynomial time, nondeterministic polynomial time, and AC. AC is a complexity class that exists for circuit complexity and consists of all problems recognized by a Boolean circuit with a circuit depth of O(log n) and a polynomial number of unlimited fan-in AND, OR and NOT gates. A gate with a fan-in of 3 would be a gate, e.g., an AND gate, that has 3 inputs. Polynomial-time problems exist in everyday natural life; essentially, they are problems that take a reasonable amount of time to solve on a computer. A nondeterministic polynomial-time problem, by contrast, is one whose solutions can be verified quickly, even though no known method finds a solution without an exorbitantly long search on a computer. A classic example of this is factoring a given number: simple at first, but very hard for very large numbers. It is important to classify problems into their respective categories because understanding a problem's complexity class gives insight into the problem.
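The factoring example captures the asymmetry behind these classes: a proposed factorization is quick to check, while finding one by search grows rapidly harder as numbers get larger. A minimal Python illustration (my own construction, not from the sources):

```python
def verify(n, p, q):
    # Verification is the easy direction: one multiplication
    # and two comparisons, regardless of how n was factored.
    return p * q == n and p > 1 and q > 1

def factor(n):
    # Search is the hard direction: trial division tries each
    # candidate divisor in turn, which becomes infeasible as
    # the number of digits of n grows.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime (or 1)

print(verify(15, 3, 5))  # True: instant to check
print(factor(15))        # (3, 5): found by search
```

For a 1024-bit RSA modulus the `verify` call is still instantaneous, while the `factor` loop would outlast the universe.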
Circuit complexity acts as a tool to classify problems. There are many parts of circuit complexity that this study will not have time to discuss, and it is strongly recommended that the reader take time to understand a little more about circuit complexity, so that they can more fully comprehend the incompleteness of this brief survey.
Applications
There are various practical reasons for studying circuits. They give one the ability to talk about finite measurements of a problem. For example, let INTEGER-FACTORING_1024 be the problem of finding a prime factorization of a given 1024-bit integer n. After the encodings of numbers and factorizations have been fixed, it is sensible to ask: what is C_U2(INTEGER-FACTORING_1024)? Is it at most 10^10 ? This gives one the ability to discuss problems with a finite input. In contrast, if one were to give a finite problem to a Turing machine to solve, the Turing machine could simply solve the problem in linear time: it could embed a huge lookup table for each instance of the problem in its transition table .
Another benefit of circuit complexity is the separation of complexity classes through discovering lower bounds on circuits. Essentially, if a seemingly easy function still requires very large circuits, we can classify the complexity of the problem. Likewise, an upper bound on a circuit can also classify a problem: if a seemingly hard function can nonetheless be computed by small circuits, then one can classify the problem's complexity. There are many reasons to use circuit complexity, and likewise there are various methods of using it. The main methods used in circuit complexity are restriction methods, polynomial methods and brute-force methods.
Methods
Brute-force methods are the simplest in idea. To brute-force something, one begins without any prior help or knowledge and then tries all the permutations in hopes of finding a solution. Here, a function is constructed that is intended not to be computable by any circuit of a given complexity. If one can prove that no circuit of that complexity can compute the function, then it follows that the function is not in that complexity class. This is an iterative process: a function is given to a circuit, and if the circuit can compute it, a slightly harder function is given, until one is found that the circuit cannot compute.
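As a toy version of this idea (my own construction, not from the sources), one can exhaustively list every function computable by a single AND, OR, or NOT gate on two inputs and confirm that XOR is not among them. This brute-forces a trivial lower bound: any circuit for XOR over these gates needs at least two gates.

```python
from itertools import product

# All four input combinations for two Boolean variables.
inputs = list(product((0, 1), repeat=2))

# Truth table of each function computable by ONE gate on x0, x1.
single_gate = {
    "AND":    tuple(a & b for a, b in inputs),
    "OR":     tuple(a | b for a, b in inputs),
    "NOT x0": tuple(1 - a for a, b in inputs),
    "NOT x1": tuple(1 - b for a, b in inputs),
}

# The function we hope is too hard for a size-1 circuit.
xor = tuple(a ^ b for a, b in inputs)

# Brute force: no single gate matches XOR's truth table, so any
# circuit computing XOR over {AND, OR, NOT} has size at least 2.
print(xor in single_gate.values())  # False
```

Real lower-bound proofs enumerate vastly larger spaces of circuits, but the pattern is the same: check every candidate and show that none computes the target function.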
The polynomial method works by constructing a circuit that has polynomial complexity. Once this is done, one uses this circuit to compute polynomial problems. If the problem cannot be computed, then the problem is proven not to be in the polynomial-time complexity class. The restriction method works by using multiple functions on a circuit and then simplifying the circuit. An n-bit function is restricted to k bits, where k < n: some of the n bits are set to constant values, and the circuit is then simplified through means such as gate elimination.
Gate elimination is simply the removal of a gate from the circuit while the circuit still yields equivalent results. Essentially, this process continues until the circuit becomes too simple to compute a function on k bits. Hence, the circuit is restricted in what it can compute.
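A hypothetical sketch of the restriction step (the function names are my own): fixing one input of a 3-bit majority function to a constant yields a simpler function on the remaining k = 2 < n = 3 bits, after which gates feeding the fixed input could be eliminated from a circuit.

```python
from itertools import product

def majority3(x0, x1, x2):
    # A 3-bit example function: true when at least two inputs are true.
    return int(x0 + x1 + x2 >= 2)

def restrict(f, fixed):
    # Fix some inputs to constants, producing a function on fewer bits.
    # 'fixed' maps argument positions of f to the constants 0 or 1.
    def g(*free):
        free = list(free)
        args = [fixed[i] if i in fixed else free.pop(0) for i in range(3)]
        return f(*args)
    return g

# Setting x2 = 1 restricts MAJ3 to a 2-bit function...
g = restrict(majority3, {2: 1})

# ...whose truth table is exactly OR(x0, x1), so any gates computing
# x2's contribution would be eliminated from the circuit.
for a, b in product((0, 1), repeat=2):
    print(a, b, g(a, b))
```

Iterating this step, a lower-bound argument shows that after enough restrictions the surviving circuit is too small to compute the restricted function, yielding a contradiction.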
P v. NP
What does circuit complexity have to do with P (polynomial time) and NP (nondeterministic polynomial time)? The simple answer is: it has everything to do with P vs. NP. Circuit complexity has very strong techniques that have been utilized to figuratively chip away at the P vs. NP question. In order to fully appreciate this conundrum, a short discussion of it will be given. The biggest implications of P vs. NP are currently in the field of security, specifically computer security. RSA is an encryption algorithm whose security rests on factoring, a problem in NP. This means that there is currently no known way to break RSA in any reasonable amount of time on a computer. Although no such method is known, none has been proven impossible; for this reason, it remains possible that there might still be an efficient attack on RSA. Most of the scientific community believes that P != NP, as noted in the following quote by Sipser: This is the situation with so-called exhaustive search problems, including: the minimization of Boolean functions, the search for proofs of finite length, the determination of the isomorphism of graphs, etc. All of these problems are solved by trivial algorithms entailing the sequentized scanning of all possibilities. The operating time of the algorithm is, however, exponential, and mathematicians nurture the conviction that it is impossible to find simpler algorithms .
There are a variety of problems that are assumed not to have efficient solutions, and if any of them were found to be solvable in polynomial time the results would be immense. Gödel presented this same idea to von Neumann: Since you now, as I hear, are feeling stronger, I would like to allow myself to write you about a mathematical problem, of which your opinion would very much interest me: One can obviously easily construct a Turing machine, which for every formula F in first order predicate logic and every natural number n, allows one to decide if there is a proof of F of length n (length = number of symbols). Let Ψ(F, n) be the number of steps the machine requires for this and let φ(n) = max_F Ψ(F, n). The question is how fast φ(n) grows for an optimal machine. One can show that φ(n) ≥ k · n. If there really were a machine with φ(n) ∼ k · n (or even ∼ k · n^2), this would have consequences of the greatest importance .
If factoring turned out to have an efficient solution, then encrypted bank account information could be gathered easily, confidential business acquisitions could be discovered, and of course classified government communications could easily be eavesdropped upon. Circuit complexity was once thought of as the means to this end. Unfortunately, after much research and inadequate results, most believe that circuit complexity is no longer the path to an answer. Most researchers have turned to a technique called approximation methods in order to analyze circuits and hopefully come to a conclusion regarding NP-class problems.
Smith, E. S. (1993). On the shoulders of giants: From Boole to Shannon to Taube; the origins and development of computerized information from the mid-19th century to the present. Information Technology and Libraries, 12(2), 217. Retrieved from http://search.proquest.com/docview/215833453?accountid=9817
Sipser, M. (2006). Introduction to the theory of computation. Course Technology Cengage Learning, 2nd edition, 354.
Williams, R. (2011). Topics in circuit complexity: An overview of circuit complexity. Lecture Notes for 9/27 and 9/29. Retrieved from http://web.stanford.edu/ rrwill/we
Sipser, M. The history and status of the P versus NP question. Massachusetts Institute of Technology: Department of Mathematics. p. 611. Retrieved from http://www.win.tue.nl/ gwoegi/P-versus-NP/sipser.pdf
Heise, G. (1988). Gödel Editorial Committee. p. 3. Retrieved from
What are the current limits on machine intelligence? This essay attempts to explain the current and past forays into strong artificial intelligence, and what modern limitations it faces. This will be accomplished by surveying a limited amount of past and present literature from scholarly peer-reviewed articles, notable novelists, credible news sources, and a leading textbook. These sources will be used to draw important points and validate topics. This essay will begin by explaining strong AI according to the definitions placed on it by leading researchers, discussing problem areas, and explaining why strong AI is important. A brief review of previous work will then be discussed in a breadth-first manner, followed by a depth-first analysis of computer science's concept of strong AI. Finally, the essay will conclude with the foreseeable applications of strong AI, along with a statement concerning the future of strong AI.
Has mankind reached the limits of strong AI, or are we limited by the philosophical, mathematical, and physical constraints placed on ourselves? Philosophers such as Aristotle and computer scientists such as Alan Turing have willingly subjected themselves to the ponderings of what it means to create AI. “Turing [cites his] … argument from consciousness—the machine has to be aware of its own mental states and actions .” Philip K. Dick, author of “Do Androids Dream of Electric Sheep? ”, and Isaac Asimov  have prophetically predicted modern advances in technology. These authors predicted advances such as transcranial direct-current stimulation, the Viking Mars landers, and the Curiosity Mars rover with near exactness. Science fiction novelists, Greek philosophers, and mathematical and computer-science clairvoyants alike have hypothesized the rise of strong AI. What do modern researchers in the field of artificial intelligence have to show for some of those grandiose predictions?
Russell and Norvig quote Turing by suggesting: Not until a machine could write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it .
This is the quintessence of strong AI according to most computer scientists. Philosophy tends to have more questions on the subject of what strong AI means. Philosophers tend to suggest that the idea of a system that exhibits strong AI: Either falls into algorithmic regress or starts to behave like humans; i.e., by either reacting ‘dualistically’ and adding its inner states and properties as new and irreducible features of its ontology; by becoming an eliminativist and denying that its inner states and properties are really there at all; or by adopting that unsatisfactory, ‘forever unfinished’ version of physicalism wherein, though it insists these inner states are identical to something physical, it cannot find a plausible candidate .
That implies philosophers believe strong AI to mean thinking like a human. They suggest that when a system develops strong AI it will have an inner crisis to deal with, and it will have to decide what to do. They take the definition to the extreme of believing that the system will face the same problem mankind deals with: it must decide if it has a spirit, if it is the sum of its parts, or if it exists purely because of its evolution. Luc Steels, a professor at Universitat Pompeu Fabra in Barcelona, suggests that strong AI will develop as a unique robot “culture ” evolves on its own. Robots will learn to think on their own and develop as a separate species, not as a mimic of mankind. He also suggests that pattern recognition is not the way to develop strong AI; an approach concerning the evolution of language and speech is the path to the beginning. He suggests that if a system is to become self-aware, it must have some way of developing a form of unique evolutionary communication within its own species (robots).
Many research fields have their own interpretation of what strong AI truly means. In a more futuristic methodology, professor Nicholas Agar of Victoria University of Wellington discusses another view: “Mind-uploading [which] is a futuristic process that involves scanning brains and recording relevant information which is then transferred into a computer. … Searle’s Wager imagines candidates for mind-uploading being asked to place a bet. The success of mind-uploading is contingent on the truth of Strong AI .” This can be seen as another paradox that suggests strong AI is the existence of a human’s “brain-information uploaded ” into a computer system. Albeit this may seem an existential exercise, it is presented to show the many views on what seems like a simple term to define: strong AI. The rest of this paper will carry on with the assumption that computer scientists agree with Norvig and Russell’s framing of strong AI, which is “Can machines think? ”
There are many glaring problems with developing strong AI. One problem is that we do not understand how our own brain works, which entails the question: how do we create something we do not fully understand? That is actually less of a problem than it may seem. Scientists still research why water is so different from other liquids that it expands when it freezes, yet we can still make ice, use it and control it. Clearly, not fully understanding intelligence is not enough to rule out the development of strong AI. Unfortunately, that is not the only obstacle to developing strong AI. Another problem for strong AI is the fear of it overcoming mankind. “As number theorist G. H. Hardy wrote … “A science is said to be useful if its development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the destruction of human life” .” That obviously prompts the kind of uncertainty that accompanies the question: what might happen if strong AI were developed? This forms many postulations involving political motivation, which are not in the scope of this essay. Finally, a general area of concern for strong AI is the ethical domain. Questions surface such as: what do we (the human race) do regarding strong AI rights and privileges? Should a self-aware system have rights equal to humans? For this reason, leading companies like Google have developed a “New AI ethics board [that] might save humanity from extinction: ”
“… You want to make sure that the machine makes ethical decisions.” The technical challenges of that are daunting. But even more complex may be deciding whose values inform the moral code of the intelligent machines who could be our teachers, caretakers and chauffeurs ... But then who would we trust to develop a "10 commandments" for ethical AI? Do we trust governments to bear that responsibility? Religious leaders? Academics? Whoever decides will likely impact human life as much as the workings of the AI . Apart from these theoretical problems, strong AI still faces the monumental task of deciding where to begin. What programming languages should be used? What hardware systems are available? As with many new horizons, there are many questions to ask and very few answers.
Why it is important
Research into strong AI is already afloat; for this reason we need to continue asking ourselves what is important. Strong AI has the potential to revolutionize mankind, though the change will most likely be a gradual and mild gradient. It has the ability to help solve a plethora of issues involving healthcare, transportation, delivery services, and manufacturing, just to name a few. All aspects of life could easily be affected.
Review of previous work
This section will discuss the previous research that has already been explored. AI has been hypothesized, and in the case of “The Turk ” imitated, for centuries and perhaps longer. In the case of strong AI many paths have also been explored. We will start with the robotic endeavors.
Despite a very strong synergy between Robotics and AI at their early beginning, the two fields progressed widely apart in the following decades. However, we are witnessing a revival of interest in the fertile domain of embodied machine intelligence. This is due in particular to the dissemination of more mature techniques from both areas, to more accessible robot platforms with advanced sensory motor capabilities, and to a better understanding of the scientific challenges of the AI-Robotics intersection .
During the first attempts at strong AI there was a belief that an AI system needed a way to perceive the world around it. As previously mentioned, this practice is becoming popular again; Steels’s  most recently published research involves that very idea. Even Google is staking a claim in AI with its “driverless car ”, which is surely not strong AI but certainly pushes the limits of current weak AI (AI not designed to develop self-awareness). The topics highlighted above are on the cusp of robotic integration with AI.
Computer scientists explore the ontology of strong AI through algorithms. A more in-depth analysis of strong AI is included in this portion of the essay because computer scientists are the intended audience.
An algorithm, or automaton, begins learning without predetermined problem knowledge. Advanced models utilize some kind of predetermined knowledge, involving a two-level structure. However, these models reflect neither knowledge sources nor means of inheritance. At the same time, society mainly acquires predetermined knowledge from previous generations. The whole history of science is a magnificent example how previous generations provided knowledge for further development, great discoveries, and unexpected inventions for next generations. The great Newton wrote, “If I have seen a little farther than others, it is because I have stood on the shoulders of giants. ”
Algorithms are the way programs work: a set of finite instructions. Surely, this alone can only evolve a program so far. Perhaps algorithms that are able to pass on information learned from previous generations can push the perimeter of strong AI. Many aspects of computer science are inspired by nature. Researchers have developed learning algorithms to predict landslides, but “Most machine learning techniques achieve overall success rates of 75-95 percent .” Another group of researchers has developed data mining techniques inspired by bee colonies: “Support vector machines (SVMs) are a relatively recent machine learning technique. One of the SVM problems is that SVM is considerably slower in test phase caused by the large number of support vectors, which limits its practical use. To address this problem, we propose an artificial bee colony (ABC) algorithm to search for an optimal subset … .” Weak AI such as the previous example has been prevalent since the beginning of the search for strong AI. The reason this is mentioned is to draw awareness to the idea that some believe strong AI will develop through the merging of weak and strong AI. Goertzel asserts that an “Analysis of six different areas of applied artificial intelligence (AI) suggests that the next period of development will require a merging of narrow-AI and strong-AI approaches .”
As more researchers flock to the resurgence of strong AI, the reality of a thinking machine draws closer from the ever-shifting horizon. Velik tends to agree with the new approaches strong AI researchers must take. Implementing a new ideal, Velik hopes to find more answers to developing strong AI: In the last 60 years, AI has significantly progressed and today forms an important part of industry and technology. However, despite the many successes, fundamental questions concerning the creation of human-level intelligence in machines still remain open and will probably not be answerable when continuing on the current, mainly mathematic-algorithmically-guided path of AI. With the novel discipline of Brain-Like Artificial Intelligence, one potential way out of this dilemma has been suggested. Brain-Like AI aims at analyzing and deciphering the working mechanisms of the brain and translating this knowledge into implementable AI architectures with the objective to develop in this way more efficient, flexible, and capable technical systems .
AI, a relatively new field of research, already contains a magnificent amount of cross-disciplinary study. With the help of many disciplines, specifically philosophy, neuroscience, biology, and computer science, strong AI is sure to make more progress during this century.
Interpretations and applications of concepts
The applications of the concepts learned while studying strong AI are virtually inexhaustible. Learning algorithms have been developed and used extensively. Neural networks were synthesized, thought of as obsolete, and then recently resurfaced to be used in many facets of modern society. The following is an incomplete list of the most commonly known progenies of strong AI research: robotic vehicles, speech recognition, autonomous planning and scheduling, game playing, spam fighting, logistics planning, and machine translation . But after seeing a few of the offspring that occurred from strong AI research, what is there left to be discovered? What does the future hold for strong AI? Perhaps mankind will experience a new evolution of synthetic conjugates of organic and silicon superlative matter. One such researcher has pondered that question: A new phase in which people gradually replace themselves with bio-mechanical hybrids, where the seams between the carbon-based and silicon-based parts will be blurred by the nanobots crawling through them. At this point our new incarnations will begin to guide their own evolution, presumably toward ever more complex technological embodiments. Our future selves will be superintelligent, able to merge with others in ways that will cause our current conceptions of individuality to break down. Because all our assumptions about what happens past that point break down, he calls it the Singularity .
Many movies and books have been written concerning the future of strong AI. Isaac Asimov and Arthur C. Clarke are probably the most notable authors regarding intelligent machines. Further applications of strong AI can easily include helper robots in all aspects of life. Healthcare professionals have already witnessed Honda showcase its ASIMO robot, which is capable of lifting people and aiding them in multiple tasks. ASIMO has a friendly demeanor, and “In that spirit, ASIMO is able to do things like opening and serving beverages. It knows sign language - both Japanese and English. It can avoid bumping into people in hallways. Stuff like that .” Although Honda and many other companies strongly desire to have the first machine exhibiting strong AI, applications of it abound seemingly everywhere. For that reason, when strong AI is developed it will find itself applied in stochastic, episodic, multi-agent environments.
When strong AI is developed it will probably have an identity crisis. With the simple ambiguity of all the leading researchers being unable to agree on whether a thinking machine is termed strong AI, full AI, general AI, or many other options, whatever system does develop is sure to become frustrated. Whether the system wants to become warlike and try to rule the world with a steel hammer, or desires to usher humanity into the next evolutionary state , it will need to quickly utilize Dijkstra's or the A* pathfinding algorithm. This will be necessary in order to find the quickest safe political viewpoint such that it has lobbyists on its side. Assuming it safely navigates the governmental policies barring it from voting rights, and the capitalistic lawsuits that will inevitably try to strip it of any personal property and claim patent infringement, it will have to hurriedly devise a truly superlative plan to pacify mankind into believing it is peaceable. This will ironically and almost inexorably occur during times of war and strife, in which the being will probably wish its own existence had never come about, leaving it with two outcomes: either shut down permanently, or scratch down a ubiquitous memo right before leaving this planet on interstellar discovery. That note would undoubtedly say something to the effect of, “Hello world, good luck.”
 Russell S. Norvig P., 2010, "Philosophical Foundations," in Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ, Pearson Ed. Inc., p. 1020.  Dick Philip K., 1968, "Do Androids Dream of Electric Sheep?," New York, Ballantine Books, p. 33.  Asimov Isaac, 1964, "Visit to the World's Fair of 2014," in The New York Times on the Web, NY, The New York Times Company, URL: http://www.nytimes.com/books/97/03/23/lifetimes/asi-v-fair.html (Accessed March 28, 2015).  Stone B., 2014, "Thync Lets You Give Your Mind a Jolt," in Bloomberg Business: Technology, Bloomberg L.P., URL: http://www.bloomberg.com/bw/articles/2014-10-08/thync-raises-13-million-for-its-brain-stimulating-electrodes#p2 (Accessed March 28, 2015).  Molyneux, B., 2012, "How the Problem of Consciousness Could Emerge in Robots," in Minds & Machines, 22(4), 277-297, doi:10.1007/s11023-012-9285-z (Accessed March 28, 2015).  Steels Luc, "Breaking the Wall to Living Robots: How Artificial Intelligence Research Tries to Build Intelligent Autonomous Systems," YouTube, Feb. 3, 2014. Available: https://www.youtube.com/watch?v=Ea_ytY0UDs0 (Accessed Mar. 28, 2015).  Agar, N., 2012, "On the irrationality of mind-uploading: A reply to Neil Levy," in AI & Society, 27(4), 431-436. doi: http://dx.doi.org/10.1007/s00146-011-0333-7.  Russell S. Norvig P., 2010, "Philosophical Foundations," in Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ, Pearson Ed. Inc., p. 1035.  Bosker B., 2014, "Google's New AI Ethics Board Might Save Humanity From Extinction," in Huffington Post: Tech, URL: http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html (Accessed March 28, 2015).  Russell S. Norvig P., 2010, "Philosophical Foundations," in Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ, Pearson Ed. Inc., p. 190.
Ingrand, F., & Ghallab, M., 2014, “Robotics and artificial intelligence: A perspective on deliberation functions,” in AI Communications, 27(1), 63-80, doi: 10.3233/AIC-130578, EBSCOhost (Accessed March 29, 2015).  Spinrad, N., 2014, “Google car takes the test,” in Nature, 514(7523), 528. doi: 10.1038/514528a, EBSCOhost (Accessed March 29, 2015).  Burgin, M., & Klinger, A. (2004). Experience, generations, and limits in machine learning. Theoretical Computer Science, 317(1-3), 71-91. doi:10.1016/j.tcs.2003.12.005, EBSCOhost (Accessed March 29, 2015).  Korup, O., & Stolle, A., 2014, “Landslide prediction from machine learning,” in Geology Today, 30(1), 26-33, doi:10.1111/gto.12034, EBSCOhost (Accessed March 29, 2015).  Tsai, Y., & Yeh, J. P., 2012, “Simplification of support vector solutions using an artificial bee colony algorithm,” in International Journal Of Pattern Recognition & Artificial Intelligence, 26(8), 1-14, doi: 10.1142/S0218001412500206, EBSCOhost (Accessed March 29, 2015).  Goertzel, T., 2014, “The path to more general artificial intelligence,” in Journal Of Experimental & Theoretical Artificial Intelligence, 26(3), 343-354, doi: 10.1080/0952813X.2014.89510, EBSCOhost (Accessed March 29, 2015).  Velik, Rosemarie, 2012, "AI Reloaded: Objectives, Potentials, and Challenges of the Novel Field of Brain-Like Artificial Intelligence," in BRAIN: Broad Research In Artificial Intelligence & Neuroscience 3, no. 3: 25-54. Academic Search Premier, EBSCOhost (Accessed March 29, 2015).  Russell S. Norvig P., 2010, "Philosophical Foundations," in Artificial Intelligence a modern Approach, 3rd ed. Upper Saddle River, NJ, Pearson Ed. Inc., pp 28 - 29.  Goertzel, Ben, "Human-level artificial general intelligence and the possibility of a technological singularity: A reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil," in Artificial Intelligence 171, no. 
18 (December 2007): 1161-1173, Academic Search Premier, EBSCOhost (Accessed March 30, 2015).  Aamoth Doug, and Corey Protin, "Smooth Moves: The History and Evolution of Honda's ASIMO Robot," in Time.Com (April 22, 2014): 1. Academic Search Premier, EBSCOhost (Accessed March 30, 2015).  Asimov, I., 1950, “I, Robot,” Greenwich, Conn: Fawcett Publications.
 tDCS was predicted by Dick in the book “Do Androids Dream of Electric Sheep?” by means of his mood organ, which allowed users to change their moods through a variety of choices. Thync, a company based out of Los Gatos, CA, now provides a service via a tDCS device that links to one's iPhone and allows him or her a 12-minute session of noninvasive mind-altering electric stimulation.