
Blue Brain - A Massive Storage Space

Abstract—The human brain, the greatest creation of God, is a package of unimaginable functions; a man is intelligent because of his brain. "Blue Brain" is the world's first virtual brain. The pink brain (the biological brain) is special because it can think, respond and make decisions without any effort and keep everything in its memory. The aim of the project is to transfer the features of the pink brain to a digital system; in short, "brain to a digital system". After the death of a human, the stored data, including intelligence, knowledge, personality, memory and feelings, can be used for the further development of society.

BB Storage Space is a concept extracted from the Blue Brain project. Storing a variety of data in memory is its main advantage. In this concept, registers act as neurons and electric signals act as stimulation impulses. Variation in the data is identified from the variation of the signals that reach the registers; the registers mentioned here are the same registers a normal system maintains. A benefit of this form of storage is that data is stored in real time without deletion, just as a normal brain does. Attaching it to an expert system produces a drastic change at the time of response. To make the system easier to interact with, regional-language conversion using Natural Language Processing (NLP) is included together with a voice-recognition technique. In the same way, the system should respond, in voice format, using the experiences it has been through; this is the output format of BB Storage Space. The nanobots of the Blue Brain project, which are used to collect information from neurons, are replaced here by real-time experiences provided to the machine.

Keywords—Blue Gene, Blue Brain Storage Space (BB Storage Space), Nanobots

I. INTRODUCTION

Blue Brain, also called the "Virtual Brain", was the first attempt to reverse-engineer the mammalian brain. The process helps to study in detail the stimulation and functionality of the pink brain. The project was initiated by Prof. Henry Markram of the Brain and Mind Institute at EPFL (École Polytechnique Fédérale de Lausanne) in Lausanne, Switzerland; IBM collaborated on the project to provide technical assistance. Recreating one of the most valuable promises of the pink brain, brain uploading that survives the death of a person, was a great challenge during the project. The aim of introducing Blue Brain is to enable extremely fast progress on recovery from brain diseases. It has been a very useful tool for short-term-memory patients, who can recover their lost memories through this feature. Research is done on an actual living brain: human brain tissue is sliced and studied in detail with microscopes and clamp electrodes, and data is collected from each neuron. The real role of IBM is to create a neuron model for the project.

The collected data is used to create a biologically realistic model and network of neurons. Simulation is performed using the Blue Gene supercomputer built by IBM (shown in fig. 1).

II. BLUE BRAIN

Blue Brain was the world's first virtual brain, focused exclusively on creating a physiological simulation for biomedical applications; that is, a machine which can function as a human brain. For this purpose a human brain has to be uploaded into the machine, so that the machine can help a man think and make decisions without any effort, and after the death of the body the virtual brain can act in place of the man. The virtual brain implements a reverse-engineering process of the human brain. No one has ever fully understood the complexity of the human brain; it is more complex than any circuitry in the world. So the question arises: "Is it really possible to create a human brain?" The answer is yes; a person may be uploaded into a computer. The uploading is made possible by small robots known as nanobots. These robots are small enough to travel throughout our circulatory system; travelling into the spine and brain, they are able to monitor the activity and structure of our central nervous system. They can provide an interface with computers that is as close to our mind as possible while we still reside in our biological form. Nanobots could also carefully scan the structure of our brain, providing a complete readout of its connections. This information, when entered into a computer, could then continue to function as us. Thus the data stored in the entire brain is uploaded into the computer.

The EPFL Brain and Mind Institute will begin simulating the brain's biological systems and output the data as a working three-dimensional model that recreates the high-speed electro-chemical interactions taking place within the brain.

These include cognitive functions such as language, learning, perception and memory, in addition to brain malfunctions such as psychiatric disorders like depression and autism.

In a similar way an artificial nervous system can be created. Scientists have created artificial neurons by replacing them with silicon chips, and tests have shown that these neurons can receive input from sensory cells; the electric impulses from the sensory cells can therefore be received through the artificial neurons. The interpretation of the electric impulses received by an artificial neuron can be carried out by means of registers: different values in these registers represent different states of the brain. Based on the states of the registers, output signals can be sent to the artificial neurons in the body, which are then received by the sensory cells. The data can be stored permanently in secondary memory; in the same way the required register states can be stored permanently and retrieved and used whenever they are needed.
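As a rough illustration of how register states could stand in for brain states and be kept in secondary memory, the following minimal Python sketch (all names and the file format are our own assumptions, not part of the project) holds a small table of register values and writes it to disk so that a stored state can be reloaded later:

import json

# Hypothetical register file: each named register holds a small integer state.
registers = {"r0": 0, "r1": 3, "r2": 1}

def set_state(name, value):
    # Update one register, mimicking an artificial neuron changing state.
    registers[name] = value

def save_states(path="register_states.json"):
    # Persist the current register states to secondary memory.
    with open(path, "w") as f:
        json.dump(registers, f)

def load_states(path="register_states.json"):
    # Restore previously stored states when they are required again.
    with open(path) as f:
        return json.load(f)

set_state("r1", 7)
save_states()
print(load_states())  # {'r0': 0, 'r1': 7, 'r2': 1}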

Decision-making can be performed by the computer using some stored states and the received input, and performing arithmetic and logical calculations. A computer chip implanted into the brain monitors brain activity and converts the intention of the user into computer commands. Applications include novel communication interfaces for motor-impaired patients, as well as the monitoring and treatment of diseases that manifest themselves in patterns of brain activity, such as epilepsy and depression.

Currently the chip uses 100 hair-thin electrodes that sense the electromagnetic signature of neurons firing in specific areas of the brain, for example the area that controls arm movement. The activity is translated into electrically charged signals, which are then sent and decoded by a program.

The simulation step involves synthesizing virtual cells using the algorithms that were found to describe real neurons. The algorithms and parameters are adjusted for the age, species and disease stage of the animal being simulated. Every single protein is simulated, and there are about a billion of these in one cell. First a network skeleton is built from all the different kinds of synthesized neurons. Then the cells are connected together according to the rules that have been found experimentally. Finally the neurons are activated and the simulation is brought to life. The patterns of emergent behaviour are viewed with visualization software.
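The paragraph above describes a three-step loop: synthesize cells, wire them according to experimentally derived rules, then run the simulation. The toy Python sketch below mirrors only that structure; the cell types, connection rule and activity measure are invented placeholders, not the project's real algorithms:

import random

random.seed(0)

# Step 1: synthesize a small population of "virtual cells".
neurons = [{"id": i,
            "kind": random.choice(["pyramidal", "interneuron"]),
            "inputs": []}
           for i in range(10)]

# Step 2: connect cells according to a (made-up) probabilistic rule.
def connect(cells, probability=0.2):
    for pre in cells:
        for post in cells:
            if pre is not post and random.random() < probability:
                post["inputs"].append(pre["id"])

connect(neurons)

# Step 3: "bring the simulation to life" with a trivial update step.
def step(cells):
    # Stand-in for firing activity: here simply the number of inputs per cell.
    return {cell["id"]: len(cell["inputs"]) for cell in cells}

print(step(neurons))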

III. BLUE BRAIN AS EFFICIENT STORAGE SPACE

A benefit of Blue Brain is well-organized large storage and prompt responses. On this basis, giving a brain to an expert system provides the extra advantage of a larger storage area. The intelligence of a man is the result of his brain; likewise, providing a system with a brain gives it the intelligence to respond in the form of voice, text and so on.

The Blue Brain storage space concept adds a memory to a digital system beyond its main memory and secondary memory. One feature of BB Storage Space is brain uploading; brain uploading here means collecting information from memory using machine learning algorithms that work in real time. The collected data is grouped into separate storage folders, or registers, based on the variation in the data; this categorization is done from changes identified in the input values. Binary values (1 and 0) are used to detect the existence of data, while decimal-point variation in the bounded values identifies variations in data type. Incorporating Natural Language Processing (NLP) makes the project more lively and user friendly, since every user is comfortable interacting in a regional language. The combination of the two makes a strong impact in the field of Artificial Intelligence together with Natural Language Processing (NLP).
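As a loose sketch of grouping incoming values into separate registers according to the variation detected in them (the grouping rule below is invented purely for illustration), each input could be routed to a register keyed by its inferred type:

from collections import defaultdict

registers = defaultdict(list)

def classify(value: str) -> str:
    # Infer a coarse data type from the raw input string.
    if value in ("0", "1"):
        return "binary"
    if "." in value and value.replace(".", "", 1).isdigit():
        return "decimal"
    if value.isdigit():
        return "integer"
    return "text"

for item in ["1", "3.14", "42", "hello"]:
    registers[classify(item)].append(item)

print(dict(registers))
# {'binary': ['1'], 'decimal': ['3.14'], 'integer': ['42'], 'text': ['hello']}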

Fig. 1. Blue Brain information storage

Fig. 2. Blue Brain vs BB Storage

IV. SIMULATION OF BLUE BRAIN VS BB STORAGE

Simulation of Blue Brain over BB Storage Space is shown in fig. 2.

V. WORKING OF BB STORAGE SPACE

Intelligence is a main contributor to making a society developed. Intelligence is an inborn quality that cannot be created; people with this quality can think and extend their knowledge by themselves. With a virtual brain, that intelligence can remain alive even after the death of the person, and people can be reassured by uploading themselves to the system. Inborn intelligence for a chat bot, however, can be created by a programmer: here, the programmer is the creator. More stored data makes the system more intelligent and more accurate in its responses. Blue Brain, which is very similar to a living brain, has a large storage space; BB Storage Space is a concept built on this large storage space, an extended idea of Blue Brain.

BB Storage Space is not exactly software but a storage space used for accepting speech-to-text data in the form of bits. We recommend bits as the storage medium to obtain fast execution and more data storage. The process is performed on three levels, and at the final stage the storage is complete.

The flow of data in BB storage is shown in fig. 3. The levels through which the data travels are as follows:

Fig. 3. Flowchart of BB storage

A. Input and Output Speech Recognition

Transmission of data to and from BB Storage Space is done through speech. A combination of Natural Language Processing (NLP) and speech recognition underpins this layer. The layer performs three actions (a minimal sketch follows the list):

Accepting input through speech

Converting speech to text and text to speech

Generating speech output
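A minimal sketch of this layer, assuming the open-source Python packages speech_recognition and pyttsx3 (the choice of libraries is ours and not specified in the original design), could look like this:

import speech_recognition as sr  # speech-to-text
import pyttsx3                   # text-to-speech

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def listen() -> str:
    # Accept input through speech and convert it to text.
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # online recognizer; other backends exist

def speak(text: str) -> None:
    # Generate the spoken output for a text response.
    tts.say(text)
    tts.runAndWait()

if __name__ == "__main__":
    heard = listen()
    speak("You said: " + heard)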

Hidden Markov models (HMMs) are popular statistical models used to implement speech-recognition technologies. The process involves the conversion of acoustic speech into a set of words and is performed by a software component. The accuracy of speech-recognition systems depends on vocabulary size and confusability, speaker dependence versus independence, the modality of speech (isolated, discontinuous or continuous; read or spontaneous) and task and language constraints. A speech-recognition system can be divided into several blocks: feature extraction, an acoustic model database built from the training data, a dictionary, a language model and the recognition algorithm itself. The dictionary connects the acoustic models with vocabulary words, while the language model reduces the number of acceptable word combinations based on the rules of the language and statistical information from different texts. Systems based on hidden Markov models are today the most widely applied in modern technology; they use the word or the phoneme as the unit of modelling. The model output is a hidden probabilistic function of the state and cannot be specified deterministically.

In continuous speech recognition, each state represents one phoneme. Training means determining the probabilities of transition from one state to another and the probabilities of the observations; the iterative Baum-Welch procedure is used for training. The process is repeated until a convergence criterion is reached, for example sufficiently small changes in the estimated parameters between two successive iterations. In continuous speech the procedure is performed for each word in a composite HMM. Once the states, observations and transition matrix of the HMM are defined, decoding (recognition) can be performed. Decoding means finding the most likely sequence of hidden states, given the observed output sequence, using the Viterbi algorithm, which is defined by a recursive relation. During the search, the n-best word sequences are generated using the acoustic models and a language model.
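The Viterbi decoding step can be written down compactly. The toy model below uses two hidden phoneme states and three observation symbols with made-up probabilities, just to show the recursion; a real recognizer works on acoustic feature vectors with far larger models:

# Toy HMM: hidden states are phonemes, observations are quantized acoustic symbols.
states = ["ph1", "ph2"]
start_p = {"ph1": 0.6, "ph2": 0.4}
trans_p = {"ph1": {"ph1": 0.7, "ph2": 0.3},
           "ph2": {"ph1": 0.4, "ph2": 0.6}}
emit_p = {"ph1": {"a": 0.5, "b": 0.4, "c": 0.1},
          "ph2": {"a": 0.1, "b": 0.3, "c": 0.6}}

def viterbi(observations):
    # V[t][s] = probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state to recover the full sequence.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

print(viterbi(["a", "b", "c"]))  # ['ph1', 'ph1', 'ph2']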

The general flow structure of the HMM-based recognizer is shown in fig. 4.

Fig. 4. Layers of speech recognition

B. Text Analytics and AI

Text Analytics is the most recent name for Natural Language Understanding, data mining and text mining. The previous layer generates text output on which text mining is performed. Text Analytics is an extension of data mining that finds patterns in unstructured sources. A technique called sentiment analysis is a highlighted feature of text analytics: it categorizes text according to the positive or negative perception of the comments provided by the user.

To understand the concept, consider an example. Suppose a user inputs two different emotional statements, such as "I'm very happy today" and "It's a bad day". The words "happy" and "bad" carry the positive and negative perceptions, and the word "very" indicates the intensity of the emotion; in such situations the system should respond accordingly. On the other hand, if the user inputs "I plan to go to Jaipur tomorrow with my wife", the system should respond "What happens if you don't go to Jaipur with your wife?". The process of converting "my" to "your" is called transposition, a technique included in deep learning. Deep learning is a widely used method for text mining and sentiment analysis and is suitable for analysing such situations. It works with two kinds of networks, the Recurrent Neural Network (RNN) and the Convolutional Neural Network (CNN). CNNs are commonly used for sentiment analysis: layers of convolution over the input layer compute outputs, each layer applies different filters, and the results are combined.
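The transposition itself can be illustrated with a simple lookup table; a trained model would learn this mapping from data, but the sketch below (word list invented for the example) shows the effect:

# Pronoun transposition: map first-person words to second-person ones.
SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are", "mine": "yours"}

def transpose(sentence: str) -> str:
    words = sentence.lower().rstrip(".?!").split()
    return " ".join(SWAPS.get(word, word) for word in words)

print(transpose("I plan to go to Jaipur tomorrow with my wife"))
# you plan to go to jaipur tomorrow with your wife

The response sentence ("What happens if you don't go to Jaipur with your wife?") is then built around the transposed text.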

The output of this layer is generated after tokenization. Tokenization is the method of identifying words, or tokens, in the text; it is applied to the text generated as the output of text mining. Formatted (structured) text is used because it makes the tokenization process easier.
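Tokenization of the structured text can be as simple as splitting on word characters; a minimal sketch using only Python's standard library (a production system would more likely use an NLP toolkit) is:

import re

def tokenize(text: str) -> list:
    # Identify word tokens in the text, ignoring punctuation and case.
    return re.findall(r"[a-z0-9']+", text.lower())

print(tokenize("I'm very happy today."))
# ["i'm", 'very', 'happy', 'today']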

C. Data on BB Storage Space

BB Storage Space is about storage in registers. The neurons of the brain are replaced by registers, which behave very similarly to neurons. Registers are chosen because they have the advantage of storing data in the form of small bits. Data is carried to these registers in real time, but before the system is taken into the real world, basic training is provided and stored permanently in the space. This basic training is done using machine learning algorithms; since we focus on speech, all of the training sets provided depend on speech. The highlighting feature of BB Storage Space over other storage techniques is that it stores only speech-input data using small bits. BB Storage Space is also capable of making decisions on its own and responding based on the user's input comments, which makes the system intelligent. Another feature is that data which does not already exist is added, while data that is already stored is reused for the response. Storing data in the form of bits makes execution faster.

Fig. 5. Flowchart of recognition in BB Storage Space

Fig. 6. General data compression scheme
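A minimal sketch of this storage behaviour, with the registers modelled as a Python dictionary and each token stored as a bit string (the encoding and the structure are our own simplification of the idea), could be:

registers = {}  # token -> bit string, standing in for the neuron-like registers

def to_bits(token: str) -> str:
    # Encode a token as a compact bit string.
    return "".join(format(byte, "08b") for byte in token.encode("utf-8"))

def store(token: str) -> bool:
    # Add the token only if it does not already exist; return True if it was added.
    if token in registers:
        return False  # already known: respond from the existing experience instead
    registers[token] = to_bits(token)
    return True

print(store("spectacular"))  # True  - new experience stored
print(store("spectacular"))  # False - already stored, reuse it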

The flowchart of data storage and retrieval to and from the registers is shown in fig. 5.

Storage of the data in the registers can be done effectively using a lossless compression algorithm. Compression is a coding process that reduces the total number of bits needed to represent given information; the general structure of compression is shown in fig. 6. If the compression and decompression process induces no information loss, the scheme is lossless; otherwise it is lossy. Several algorithms, such as Huffman coding and extended Huffman coding, perform similar actions, but the LZW compression and decompression algorithm is considered more powerful than the others. LZW uses fixed-length codewords to represent variable-length strings of symbols or characters that commonly occur together.
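A compact LZW encoder and decoder in Python (a standard textbook formulation, shown here only to illustrate how fixed-length codes come to represent recurring strings) follows:

def lzw_compress(text: str) -> list:
    # Replace recurring strings with integer codewords.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = ch
    if current:
        output.append(dictionary[current])
    return output

def lzw_decompress(codes: list) -> str:
    # Invert the encoding; lossless, so the original text is recovered exactly.
    dictionary = {i: chr(i) for i in range(256)}
    next_code = 256
    previous = dictionary[codes[0]]
    result = [previous]
    for code in codes[1:]:
        entry = dictionary.get(code, previous + previous[0])  # handles the KwKwK case
        result.append(entry)
        dictionary[next_code] = previous + entry[0]
        next_code += 1
        previous = entry
    return "".join(result)

codes = lzw_compress("TOBEORNOTTOBEORTOBEORNOT")
print(lzw_decompress(codes) == "TOBEORNOTTOBEORTOBEORNOT")  # True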

D. Working of entire system

The user inputs comments through speech/voice. The input may be in a standard language (English) or in any regional language. If the input is detected and accepted correctly according to the syntax of the specified regional language, an electric impulse is generated and transmitted to the registers. When the impulses reach the registers, the system checks whether the tokens already exist in the registers: if the tokens are not present, text analytics is performed; otherwise the response is produced from the experiences of the system. A simple conversation between the user and the system is as follows:

Message: Hello, How are you?

Response: Hi, I’m fine. How’s your day?

Message: The day was spectacular.

Response: Can you share what makes you so happy?

The conversation continues in this way. From this interaction the user can see that the system is intelligent, because it recognizes from the word "spectacular" that the user is in a positive mood; that is why the system asks the user to share their happiest feelings.
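A minimal sketch of choosing a reply from the detected sentiment (the word lists and canned replies below are purely illustrative, not the system's actual rules):

POSITIVE = {"happy", "spectacular", "great", "wonderful"}
NEGATIVE = {"bad", "sad", "terrible", "awful"}

def respond(message: str) -> str:
    # Pick a reply based on the user's detected sentiment.
    words = set(message.lower().rstrip(".?!").split())
    if words & POSITIVE:
        return "Can you share what makes you so happy?"
    if words & NEGATIVE:
        return "I'm sorry to hear that. What went wrong?"
    return "Tell me more."

print(respond("The day was spectacular."))  # Can you share what makes you so happy?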

VI. FUTURE SCOPE

BB Storage Space is a concept focused on storage where input and output are provided only through voice/speech. As a future extension of this concept we can include visual data as input: videos, pictures or even real-time actions. Using the same compression algorithms, pictures and videos can be split into smaller bits, which allows large amounts of data to be stored.

VII. CONCLUSION

BB Storage Space is an additional advantage provided to our existing systems. It allows a larger amount of data to be stored in memory with less effort, and a BB Storage Space attached to a chat bot makes it more intelligent and faster to respond based on the experiences provided to it.

