Thursday, September 29, 2016

Computer program helps diagnose brain cancer

Having defeated humans at chess and Go, computers have now outperformed us on another sophisticated task: brain cancer diagnosis. A computer program recently developed at Case Western Reserve University correctly diagnosed 12 out of 15 brain cancer patients by analyzing their MRI scans. Meanwhile, of two physicians who studied the same scans, one got 8 right and the other got 7. Utilizing radiomic features, the program was nearly twice as accurate as the two neuroradiologists!

This progress matters because MRI scans of radiation necrosis and recurrent brain cancer show almost indistinguishable patterns, so physicians often have difficulty differentiating them just by eyeballing the images. Treatments for the two conditions are also vastly different, so the more quickly and accurately the disease can be identified, the better off patients will be.

So how does this program work? Researchers combined machine learning algorithms with radiomics, an emerging field that “aims to extract large amount of quantitative features from medical images using data-characterization algorithms” (Wikipedia). Using sample MRI scans from numerous patients, the scientists trained computers to recognize radiomic features that differentiate brain cancer from radiation necrosis. Computer algorithms would then help sort out the most discriminating radiomic features, that is, the subtle details that physicians often miss.
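To make the idea concrete, here is a minimal sketch of how one might train a classifier on pre-extracted radiomic features with scikit-learn. This is not the study's actual pipeline; the toy data, feature count, and model choice are all assumptions for illustration:

```python
# Hypothetical sketch, not the study's actual pipeline: train a classifier
# on pre-extracted radiomic feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))      # 60 lesions x 10 radiomic features (toy data)
y = rng.integers(0, 2, size=60)    # 1 = tumor recurrence, 0 = radiation necrosis

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print("mean accuracy:", scores.mean())
```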


For example, take two MRI scans, one of tumor recurrence and one of radiation necrosis. It is quite difficult to notice any disparity between the two scans and decide which one is which. The program's output images, however, clearly display which scan has less heterogeneity (shown in blue), indicating radiation necrosis, and which has more heterogeneity (shown in red), which is representative of tumor recurrence.
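As a loose illustration of what such a heterogeneity map could be, the sketch below computes local texture variance over an image. The neighborhood size and the variance-as-heterogeneity proxy are my assumptions, not the program's actual method:

```python
# Illustrative only: one crude proxy for "heterogeneity" is the variance of
# pixel intensities in a small neighborhood around each pixel.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=5):
    """Variance of intensities in a size x size window around each pixel."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.maximum(mean_sq - mean ** 2, 0.0)   # clip tiny negatives

scan = np.random.rand(128, 128)       # stand-in for an MRI slice
heterogeneity = local_variance(scan)  # high values = more heterogeneous texture
```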



Currently, the researchers are still working to improve the program's accuracy by training the machine learning algorithms on a much larger collection of images. In the future, this program could be a great tool for neuroradiologists inspecting suspicious lesions and diagnosing their patients.

Sources:
https://www.sciencedaily.com/releases/2016/09/160915132448.htm
https://en.wikipedia.org/wiki/Radiomics

Friday, September 23, 2016

Quantum computing versus traditional computers




Compared to the bulky machines that took up whole rooms decades ago, the portable and efficient laptops most college students carry around nowadays represent remarkable progress. Yet, although the size and efficiency of computers have improved significantly, their underlying principle has remained essentially the same.

Conventional computers store and process information using switches called transistors. A transistor can only be on or off: if on, it stores a one (1); if off, it stores a zero (0). As we discussed in class, this binary digit (bit) system has some limitations. The more information a computer needs to store, the more binary ones and zeros, and thus transistors, it needs to handle. Since most conventional computers can only do one step at a time, there is a finite amount of data they can process. Some complex algorithms thus “might require more computing power and time than any modern machine could reasonably supply” (ExplainThatStuff).
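For a concrete sense of how storage cost grows with information, here is a short illustrative snippet:

```python
# Each bit is one transistor's on/off state, so n bits distinguish 2**n values.
for n in (1, 8, 32):
    print(f"{n} bits -> {2 ** n:,} possible values")

x = 0b1011                    # a 4-bit pattern storing the single number 11
print(x, format(x, "04b"))    # a classical register holds one value at a time
```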

Theoretically, quantum computing could offer a solution to this problem and even underpin a whole new generation of computers. Instead of running on electrical circuits as “bits”, quantum computers would utilize “tiny particles that are magnetically suspended in extremely cold environment, called quantum bits or ‘qubits’” (Science Alert). One particular advantage qubits have is that they can take on the value of 0, 1, both at once, or values in between, storing multiple values simultaneously. This parallel-processing feature gives quantum computers far greater computing ability than traditional ones on certain problems: massive calculations, rendering complex graphics animations, or cracking encryption by brute force could all run significantly faster. If successfully developed, quantum computers would thus be essential to specific fields such as encryption and graphics.
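To see what superposition means in terms of amplitudes, here is a toy classical simulation of a single qubit in numpy. This is only a pedagogical sketch; it is not how a real quantum computer operates:

```python
# Toy simulation of one qubit's state vector (amplitudes), for intuition only.
import numpy as np

zero = np.array([1, 0], dtype=complex)                        # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

psi = H @ zero                           # equal superposition of 0 and 1
print("amplitudes:", psi)                # [0.707, 0.707]
print("P(0), P(1):", np.abs(psi) ** 2)   # measuring yields 0 or 1, each p = 0.5

# n qubits require 2**n amplitudes to describe classically -- the exponential
# state space behind the "parallelism" described above.
```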


Currently, quantum computing researchers face many limitations that are often contingent on advances in superconductors, nanotechnology, and quantum electronics, which are equally complicated research fields. However, there has been promising progress. For example, in 2000, “MIT professor Isaac Chuang used five fluorine atoms to make a crude, five-qubit quantum computer. Five years later, researchers at the University of Innsbruck produced the first quantum computer that could manipulate a qubyte (eight qubits)” (ExplainThatStuff).

Sources:
http://www.explainthatstuff.com/quantum-computing.html
http://www.sciencealert.com/watch-quantum-computing-explained-in-less-than-2-minutes
https://en.wikipedia.org/wiki/Quantum_computing

Friday, September 16, 2016

MyShake helps detect earthquakes

Earthquakes are costly not only in terms of infrastructure and assets, but also in lives. For example, the 1995 Kobe earthquake in Japan caused $131 billion worth of damage and, tragically, more than 4,000 deaths. One essential goal for seismologists has thus been developing more effective early-warning systems to minimize the damage. Seismic networks worldwide can detect earthquakes and send data back to scientists, but in some areas the network is thin, which prevents seismologists from analyzing the situation accurately and promptly.


App developers offer a solution with a recently developed app called MyShake that helps users detect earthquakes. This smartphone app can “pick up and interpret nearby quake activity, estimating the earthquake's location and magnitude in real-time, and then relaying the information to a central database for seismologists to analyze” (LiveScience). Users carrying smartphones in earthquake zones can instantly share data with scientists, notifying them of seismic activity. MyShake can thus fill the gaps left by sparse seismic networks.

MyShake's underlying principle is similar to that of fitness apps: it utilizes the smartphone's accelerometer, an instrument that detects changes in the device's orientation, acceleration, vibration, tilt, and movement. So that it does not confuse everyday shakes with those of earthquakes, the app compares the vibrating motion against the signature amplitude and frequency content of earthquake shaking.
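As a rough sketch of that idea, the snippet below checks whether a vibration sample's dominant frequency and amplitude fall in an earthquake-like range. The frequency band, amplitude threshold, and sampling rate are made-up parameters, not MyShake's actual classifier:

```python
# Hypothetical sketch: flag a vibration trace as quake-like if its dominant
# frequency sits in a low band and its amplitude is large enough.
import numpy as np

def looks_like_quake(samples, rate_hz, band=(1.0, 10.0), min_amp=0.05):
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC component
    return band[0] <= peak <= band[1] and samples.std() > min_amp

t = np.linspace(0, 10, 500)                 # 10 s at 50 Hz, like an accelerometer
shaking = 0.2 * np.sin(2 * np.pi * 3 * t)   # 3 Hz motion, quake-like
print(looks_like_quake(shaking, rate_hz=50))   # True
```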



After MyShake detects an earthquake, it instantly sends an alert to a central processing site. A network detection algorithm, activated by incoming data from multiple phones in the same area, then "declares" an earthquake, identifies its location, and estimates its magnitude (LiveScience). Although the app is currently limited to collecting and transmitting data to the central processor, its end goal is to send warnings back to individual users. Yet even without this feature, MyShake is already incredibly helpful to seismologists: the more data it gathers about earthquakes, the more scientists can improve their understanding of quake behavior, which will help them design better early-warning systems and safety protocols (LiveScience).
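Here is a hypothetical sketch of such a network detection step, declaring an earthquake only when enough phones in the same grid cell report shaking within a short window. The grid size, time window, and phone threshold are invented for illustration:

```python
# Illustrative sketch of "network detection": cluster phone alerts by area
# and declare a quake when several phones report within a short time window.
from collections import defaultdict

def declare_quakes(alerts, grid_deg=0.1, window_s=10.0, min_phones=4):
    """alerts: list of (timestamp_s, lat, lon) tuples from individual phones."""
    buckets = defaultdict(list)
    for t, lat, lon in alerts:
        cell = (round(lat / grid_deg), round(lon / grid_deg))
        buckets[cell].append(t)
    quakes = []
    for cell, times in buckets.items():
        times.sort()
        for i, t0 in enumerate(times):
            if sum(1 for t in times[i:] if t - t0 <= window_s) >= min_phones:
                quakes.append((cell, t0))   # declared quake: where and when
                break
    return quakes
```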

Sources:
http://www.livescience.com/53703-earthquake-detecting-app-myshake.html
http://www.techgiri.com/appsforpc/this-myshake-android-app-can-detect-earthquakes-now-available-for-download/
http://kbctv.co.ke/blog/2016/04/15/deadly-earthquake-hits-in-southern-japan/

Friday, September 9, 2016

Beacon and its potential

Retailers, brands, and manufacturers are celebrating the advent of beacons, a location-based technology that broadcasts signals using Bluetooth Low Energy (BLE). Basically, beacon-enabled apps are notified when the device enters or exits the range of a beacon (Lighthouse). Suppose Forever21 decides to install beacon sensors in its stores. Then whenever customers enter a beacon's zone, apps installed on their smartphones can pick up the coupons, sales, special promotions, and recommendations transmitted from the beacon. Consumers love coupons and sales, while retailers enjoy driving more clients to their doors. Clearly, beacons create a win-win situation.

As mentioned above, beacons use BLE, a wireless network technology designed for low energy consumption and cost while maintaining a communication range similar to that of its predecessor, Classic Bluetooth (Lighthouse). These small wireless devices broadcast packets of data at regular intervals, which nearby smartphones then collect. Smartphone apps in turn determine how close the phone is to each beacon and trigger corresponding actions such as offering discounts, displaying alerts, or opening doors. In short, the beacon acts as a broadcaster sending the data, while the smartphone app is the receiver.
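One common way apps turn a beacon's received signal strength into proximity is the log-distance path-loss model, sketched below. The calibration constants here are typical assumed values, not part of any specific beacon SDK:

```python
# Estimate distance from RSSI with the log-distance path-loss model.
# tx_power_dbm is the expected RSSI at 1 m (beacons broadcast this value);
# n is an environment factor, typically between 2 and 4 indoors.
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, n=2.0):
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

print(estimate_distance_m(-59))   # ~1 m
print(estimate_distance_m(-75))   # ~6 m, i.e. several meters away

# An app would then map distances to zones (immediate / near / far) and, say,
# show a coupon only when the user is "near" the beacon.
```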

Unlike GPS, which might lose its signal when you walk inside a large building, beacons can determine your physical location indoors with accuracy. These small wireless devices might not only create a paradigm shift in advertising and the shopping experience, but also find applications in tourism, education, entertainment, and many other fields.


Sources: http://lighthouse.io/beginners-guide-to-beacons/
http://estimote.com/press-kit/

Friday, September 2, 2016

Artificial Intelligence (AI) assists people with severe disabilities





With virtual surgery, robots that aid in rehabilitation, and computer systems that analyze medical data, Computer Science (CS) has proved to be an increasingly important part of medical advances. A recently developed innovation, brain-machine interface (BMI) technology, once again exemplifies CS's versatile applications. BMI aims to help people with severe disabilities use mental commands to execute everyday actions, such as sending emails or operating a TV.

Research on BMI has demonstrated positive progress. At the University of Pittsburgh, scientists developed technology that allows a monkey to mentally control a robotic arm to feed itself pieces of fruit. The electrical signals generated in the monkey's brain when it thinks about the action are recorded by tiny electrodes implanted in the motor cortex; a computer decoding algorithm then translates the signals and triggers the arm's movement (Science Daily). This brain-controlled device might hold the key to a better future for paralyzed patients, assisting them in performing simple tasks.
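To give a flavor of that decoding step, here is a toy sketch that fits a linear map from recorded firing rates to arm velocity. Real BMI decoders are calibrated per subject and are far more sophisticated; all data here is synthetic:

```python
# Toy illustration of neural decoding: a linear model from firing rates to
# 2-D arm velocity, trained on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
rates = rng.poisson(5, size=(200, 30)).astype(float)   # 200 time bins, 30 neurons
true_w = rng.normal(size=(30, 2))                      # hidden tuning (toy)
velocity = rates @ true_w + rng.normal(scale=0.5, size=(200, 2))   # (vx, vy)

decoder = LinearRegression().fit(rates, velocity)      # calibration step
vx, vy = decoder.predict(rates[:1])[0]                 # decoded command for one bin
print(vx, vy)
```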

Recently, researchers at the Federal Institute of Technology in Lausanne developed a robotic wheelchair that combines brain control with AI. This technology is a step forward from electroencephalography (EEG) control alone: with EEG, users need to wear a skullcap and constantly issue mental commands to maneuver the wheelchair around, which can be pretty tiring. The newly developed robotic wheelchair, on the other hand, lets patients think of a command only once; the software then takes care of the rest (Technology Review).

Such promising results in BMI research show the future potential of this technology. If successful, these mentally controlled devices and wheelchairs could revolutionize the lives of those who have lost muscle control and mobility.


Autocorrect - Is your phone reading your mind?



Aside from the mortifying and hilarious autocorrect fails that often circulate around the internet, the autocorrect feature on smartphones is generally useful: it speeds up our texting and corrects embarrassing misspellings before we send out important emails. In some cases, this handy tool can even analyze context by determining the recipient of a message and suggest the most suitable alternatives.

So, what is the algorithm behind autocorrect? Could we build a simple autocorrect program?

The basic principle of autocorrect is to have a comprehensive dictionary of words and colloquialisms often used in modern contexts. Given a word, it tries different kinds of revisions: a deletion, a transposition (swapping two adjacent letters), a replacement, or an insertion. For a word of length n, there will be n deletions, n-1 transpositions, 26n replacements, and 26(n+1) insertions, for a total of 54n+25 candidates (Norvig). Because this approach produces such a big set, the algorithm then rules out all nonsensical words using the built-in dictionary.
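Here is what that candidate-generation step looks like in Python, adapted from Norvig's write-up (linked below); the three-word dictionary is just a stand-in:

```python
# Candidate edits at distance one, adapted from Peter Norvig's spell corrector:
# n deletions, n-1 transpositions, 26n replacements, 26(n+1) insertions.
def edits1(word):
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

WORDS = {"late", "latest", "lattes"}     # stand-in dictionary
candidates = edits1("lates") & WORDS     # rule out the nonsensical words
print(candidates)                        # {'late', 'latest', 'lattes'}
```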

However, how could the program tell whether “lates” should be corrected to “late”, “latest”, or “lattes” (Norvig)? What is its secret to finding the most suitable correction? This is where math and probability come in. Let w be the word the user typed and c be a candidate correction. The algorithm considers both the probability that c appears as a word of English text (e.g., the probability of “the” occurring in English text is about 7%) and the probability that w would be typed when the author meant c (e.g., high for “teh” given “the”, but low for “theeezyx” given “the”) (Norvig). It then picks the correction c that maximizes the product of these two probabilities. I will not go too deep into the calculations in this blog, but Norvig's write-up (linked below) gives a thorough and relatively easy-to-understand explanation if you want to dig deeper.
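A minimal sketch of that selection step, reusing edits1 from above; the corpus counts are toy numbers, and the probability of typing w when c was meant is crudely treated as uniform over edit-distance-1 candidates (a simplification Norvig himself makes in his basic corrector):

```python
# Pick the known candidate with the highest corpus probability P(c).
from collections import Counter

CORPUS_COUNTS = Counter({"late": 900, "latest": 300, "lattes": 5})  # toy counts
TOTAL = sum(CORPUS_COUNTS.values())

def P(word):
    return CORPUS_COUNTS[word] / TOTAL

def correction(word):
    candidates = edits1(word) & set(CORPUS_COUNTS) or {word}
    return max(candidates, key=P)

print(correction("lates"))   # 'late' -- the most probable known candidate
```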

Those are the basics of a simple autocorrect program. Most current autocorrect systems, including those on iPhone, Android, and BlackBerry, also incorporate some kind of learning behavior and are constantly improving to meet users' needs. Nevertheless, this technology still has a lot of room to grow. Hopefully, in the future, autocorrect mishaps will become rarer, and the algorithm may be able to predict the full phrases or sentences that users intend to type.


Sources:
Content:
http://norvig.com/spell-correct.html
http://www.slate.com/articles/technology/technology/2010/07/yes_ill_matty_you.html
Graphics:
pirateprerogative.com. Dear AutoCorrect
DailyMails.co.uk. The Funniest Autocorrect Fails Sweeping Webs.