We Believe Technocracy

Michigan State University engineering researchers have created a new way to harvest energy from human motion, using a film-like device that can actually be folded to create more power. With the low-cost device, known as a nano-generator, the scientists successfully operated an LCD touch screen, a bank of 20 LED lights and a flexible keyboard, all with a simple touching or pressing motion and without the aid of a battery.

The groundbreaking findings, published in the journal Nano Energy, suggest "we're on the path toward wearable devices powered by human motion," said Nelson Sepulveda, associate professor of electrical and computer engineering and lead investigator of the project.


"What I foresee, relatively soon, is the capability of not having to charge your cell phone for an entire week, for example, because that energy will be produced by your movement," said Sepulveda, whose research is funded by the National Science Foundation.
The innovative process starts with a silicon wafer, which is then fabricated with several layers, or thin sheets, of environmentally friendly substances including silver, polyimide and polypropylene ferroelectret. Ions are added so that each layer in the device contains charged particles. Electrical energy is created when the device is compressed by human motion, or mechanical energy.

The completed device is called a biocompatible ferroelectret nanogenerator, or FENG. The device is as thin as a sheet of paper and can be adapted to many applications and sizes. The device used to power the LED lights was palm-sized, for example, while the device used to power the touch screen was as small as a finger.

Advantages such as being lightweight, flexible, bio-compatible, scalable, low-cost and robust could make FENG "a promising and alternative method in the field of mechanical-energy harvesting" for many autonomous electronics such as wireless headsets, cell phones and other touch-screen devices, the study says. Remarkably, the device also becomes more powerful when folded.
"Each time you fold it you are increasing exponentially the amount of voltage you are creating," Sepulveda said. "You can start with a large device, but when you fold it once, and again, and again, it's now much smaller and has more energy. Now it may be small enough to put in a specially made heel of your shoe so it creates power each time your heel strikes the ground."
Sepulveda and his team are developing technology that would transmit the power generated from the heel strike to, say, a wireless headset.
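
As a rough back-of-the-envelope sketch of the folding claim above, the snippet below assumes the output voltage roughly doubles with every fold, which is one way to read "increasing exponentially"; the base voltage and fold counts are made-up placeholder numbers, not measurements from the study.

```python
# Illustrative only: models the "voltage grows exponentially with folds" idea,
# assuming the output roughly doubles each time the FENG film is folded.
# The base voltage and fold counts are placeholder values, not measured data.

def folded_voltage(base_voltage_v: float, folds: int) -> float:
    """Estimated output voltage after a number of folds (doubling per fold)."""
    return base_voltage_v * (2 ** folds)

if __name__ == "__main__":
    base = 1.0  # hypothetical unfolded output, in volts
    for n in range(5):
        print(f"{n} folds -> ~{folded_voltage(base, n):.1f} V "
              f"(footprint shrinks to 1/{2 ** n} of the original)")
```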


Author of this post :
Arunuday Ganju, Team member



“ Creativity knows no limits ”

This one phrase captures the new and exciting blend of mobility and robust configurability in the HP ZBook 15 Mobile Workstation.

It’s the perfect mix of style, features, and portability. No one really needs an introduction to NASA (National Aeronautics and Space Administration) or what it does. But one of its ongoing projects is running the ISS (International Space Station), which sustains the human crews conducting crucial experiments beyond Earth’s atmosphere.

What is the ISS?

The ISS is a joint project of five space agencies: NASA, Roscosmos of Russia, the Japan Aerospace Exploration Agency, the Canadian Space Agency, and the European Space Agency. The computers on the ISS are upgraded every six years, and this time HP ZBooks were selected.

You ask why?

Well, the Z Workstations handle practically everything the ISS mission depends on: they are tied in to ISS life-support, vehicle control, maintenance, and operations systems. They provide mission support for onboard experiments, email, entertainment, and more. In short, everything the astronauts on the ISS need to survive, communicate, and do research.

Why did they choose HP ZBooks?


NASA chose the HP ZBook 15 Mobile Workstation because of the performance and reliability that HP builds into its mobile workstation solutions. With advanced capabilities that are typically not available in notebooks, like 3D graphics, powerful processors, and massive memory, crew members can be a lot more efficient. Even the boot-up timings are scrutinized when an astronaut’s time is worth more than $100,000 an hour.


DESCRIPTION:

Conquer the professional space with the perfect combination of brains and beauty. The iconic 15.6" diagonal HP ZBook Studio is HP’s thinnest, lightest, and most attractive full-performance mobile workstation. With dual 1 TB HP Z Turbo Drive G2 storage, 32 GB of ECC memory, and optional HP DreamColor UHD or FHD touch displays, this is a killer from HP’s stable.


Work confidently with HP mobile workstations designed around 30 years of HP Z DNA and extensive ISV certification. The all-new ZBook Studio G3 is reliable, designed to pass MIL-STD 810G testing, and has endured 120K hours of testing in HP's Total Test Process.

For the techies: reduce boot-up, calculation, and response times, because the trendsetting HP ZBook handles large files with optional dual 1 TB HP Z Turbo Drive G2 storage for a remarkably fast and innovative solution. Quickly and easily transfer data and connect to devices: this mobile workstation is packed with ports, including dual Thunderbolt 3 ports, HDMI, and more.
“Thus we can say an HP ZBook makes your statement bolder and your performance better.”
More info at hp.com


Author of this post :
Dhruvi Rajput, Team member

University of Washington researchers have taken a first step in showing how humans can interact with virtual realities via direct brain stimulation. In a paper published online Nov. 16 in Frontiers in Robotics and AI, they describe the first demonstration of humans playing a simple, two-dimensional computer game using only input from direct brain stimulation -- without relying on any usual sensory cues from sight, hearing or touch.

The subjects had to navigate 21 different mazes, with two choices at each step, to move forward or down, based on whether they sensed a visual stimulation artifact called a phosphene, which is perceived as a blob or bar of light. To signal which direction to move, the researchers generated a phosphene through transcranial magnetic stimulation, a well-known technique that uses a magnetic coil placed near the skull to directly and noninvasively stimulate a specific area of the brain.

"The way virtual reality is done these days is through displays, headsets and goggles, but ultimately your brain is what creates your reality," said senior author Rajesh Rao, UW professor of Computer Science & Engineering and director of the Center for Sensorimotor Neural Engineering.
"The fundamental question we wanted to answer was: Can the brain make use of artificial information that it's never seen before that is delivered directly to the brain to navigate a virtual world or do useful tasks without other sensory input? And the answer is yes."
The five test subjects made the right moves in the mazes 92 percent of the time when they received the input via direct brain stimulation, compared to 15 percent of the time when they lacked that guidance.

The simple game demonstrates one way that novel information from artificial sensors or computer-generated virtual worlds can be successfully encoded and delivered noninvasively to the human brain to solve useful tasks. It employs a technology commonly used in neuroscience to study how the brain works -- transcranial magnetic stimulation -- to instead convey actionable information to the brain. The test subjects also got better at the navigation task over time, suggesting that they were able to learn to better detect the artificial stimuli.
"We're essentially trying to give humans a sixth sense," said lead author Darby Losey, a 2016 UW graduate in computer science and neurobiology who now works as a staff researcher for the Institute for Learning & Brain Sciences (I-LABS). "So much effort in this field of neural engineering has focused on decoding information from the brain. We're interested in how you can encode information into the brain."
The initial experiment used binary information -- whether a phosphine was present or not -- to let the game players know whether there was an obstacle in front of them in the maze. In the real world, even that type of simple input could help blind or visually impaired individuals navigate.
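
The sketch below illustrates that binary encoding: a pulse is delivered (and a phosphene perceived) only when an obstacle lies ahead, and the player's move is chosen from that single bit. The maze layout and move labels are hypothetical, purely for illustration; this is not the experimenters' actual software.

```python
import random

# Toy sketch of the one-bit phosphene code described above (illustrative only):
# the experimenter delivers a TMS pulse (so the subject perceives a phosphene)
# only when an obstacle lies ahead; the subject chooses a move from that bit.

def stimulate(obstacle_ahead: bool) -> bool:
    """Experimenter side: pulse (phosphene) if and only if an obstacle is ahead."""
    return obstacle_ahead

def choose_move(phosphene_seen: bool) -> str:
    """Subject side: pick a move using only the perceived phosphene bit."""
    return "move down" if phosphene_seen else "move forward"

if __name__ == "__main__":
    maze_steps = [random.choice([True, False]) for _ in range(10)]  # True = obstacle ahead
    correct = sum(
        choose_move(stimulate(obstacle)) == ("move down" if obstacle else "move forward")
        for obstacle in maze_steps
    )
    print(f"correct moves: {correct}/{len(maze_steps)}")
```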

Theoretically, any of a variety of sensors on a person's body -- from cameras to infrared, ultrasound, or laser rangefinders -- could convey information about what is surrounding or approaching the person in the real world to a direct brain stimulator that gives that person useful input to guide their actions.
"The technology is not there yet -- the tool we use to stimulate the brain is a bulky piece of equipment that you wouldn't carry around with you," said co-author Andrea Stocco, a UW assistant professor of psychology and I-LABS research scientist. "But eventually we might be able to replace the hardware with something that's amenable to real world applications."


Author of this post :
Abhijit Chopra, Team member

The Bluetooth Special Interest Group (SIG) officially adopted Bluetooth 5 as the latest version of the Bluetooth core specification this week.

Key updates to Bluetooth 5 include longer range, faster speed, and larger broadcast message capacity, as well as improved interoperability and coexistence with other wireless technologies. Bluetooth 5 continues to advance the Internet of Things (IoT) experience by enabling simple and effortless interactions across the vast range of connected devices.

"Bluetooth is revolutionizing how people experience the IoT. Bluetooth 5 continues to drive this revolution by delivering reliable IoT connections and mobilizing the adoption of beacons, which in turn will decrease connection barriers and enable a seamless IoT experience." Mark Powell, Executive Director of the Bluetooth SIG
Key feature updates include four times the range, twice the speed, and eight times the broadcast message capacity. Longer range powers whole-home and building coverage, for more robust and reliable connections. Higher speed enables more responsive, high-performance devices. Increased broadcast message size increases the data that can be sent, for improved and more context-relevant solutions.
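
To make those multipliers concrete, here is a small comparison sketch. The baseline Bluetooth 4.2 LE figures (1 Mbps PHY, 31-byte advertising payload) and the Bluetooth 5 figures (2 Mbps PHY, 255-byte extended advertising) are the commonly cited values, included here as assumptions rather than quotes from the announcement.

```python
# Rough Bluetooth 4.2 LE vs Bluetooth 5 comparison (commonly cited figures,
# treated as assumptions; the announcement itself only states the multipliers).
specs = {
    "Bluetooth 4.2 LE": {"phy_mbps": 1, "adv_payload_bytes": 31,  "relative_range": 1},
    "Bluetooth 5":      {"phy_mbps": 2, "adv_payload_bytes": 255, "relative_range": 4},
}

old, new = specs["Bluetooth 4.2 LE"], specs["Bluetooth 5"]
print(f"speed:     x{new['phy_mbps'] / old['phy_mbps']:.0f}")
print(f"broadcast: x{new['adv_payload_bytes'] / old['adv_payload_bytes']:.1f}")
print(f"range:     x{new['relative_range'] / old['relative_range']:.0f}")
```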

Bluetooth 5 also includes updates that help reduce potential interference with other wireless technologies to ensure Bluetooth devices can coexist within the increasingly complex global IoT environment. Bluetooth 5 delivers all of this while maintaining its low-energy functionality and flexibility for developers to meet the needs of their device or application.
Consumers can expect to see products built with Bluetooth 5 within two to six months of today’s release.

Source: Bluetooth.com

Author of this post :
Abhishek Jain, Co-Founder


Speech recognition systems, such as those that convert speech to text on cellphones, are generally the result of machine learning. A computer pores through thousands or even millions of audio files and their transcriptions, and learns which acoustic features correspond to which typed words. But transcribing recordings is costly, time-consuming work, which has limited speech recognition to a small subset of languages spoken in wealthy nations.

At the Neural Information Processing Systems conference this week, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new approach to training speech-recognition systems that doesn't depend on transcription. Instead, their system analyzes correspondences between images and spoken descriptions of those images, as captured in a large collection of audio recordings. The system then learns which acoustic features of the recordings correlate with which image characteristics.


"The goal of this work is to try to get the machine to learn language more like the way humans do," says Jim Glass, a senior research scientist at CSAIL and a co-author on the paper describing the new system.
"The current methods that people use to train up speech recognizers are very supervised," Glass says. "You get an utterance, and you're told what's said. And you do this for a large body of data. Big advances have been made -- Siri, Google -- but it's expensive to get those annotations, and people have thus focused on, really, the major languages of the world. There are 7,000 languages, and I think less than 2 percent have ASR [automatic speech recognition] capability, and probably nothing is going to be done to address the others. So if you're trying to think about how technology can be beneficial for society at large, it's interesting to think about what we need to do to change the current situation. And the approach we've been taking through the years is looking at what we can learn with less supervision."

Joining Glass on the paper are first author David Harwath, a graduate student in electrical engineering and computer science (EECS) at MIT, and Antonio Torralba, an EECS professor.

Text terms associated with similar clusters of images, such as, say, "storm" and "clouds," could be inferred to have related meanings. Because the system in some sense learns words' meanings -- the images associated with them -- and not just their sounds, it has a wider range of potential applications than a standard speech recognition system.

To test their system, the researchers used a database of 1,000 images, each of which had a recording of a free-form verbal description associated with it. They would feed their system one of the recordings and ask it to retrieve the 10 images that best matched it. That set of 10 images would contain the correct one 31 percent of the time.

"I always emphasize that we're just taking baby steps here and have a long way to go," Glass says. "But it's an encouraging start."
The researchers trained their system on images from a huge database built by Torralba; Aude Oliva, a principal research scientist at CSAIL; and their students. Through Amazon's Mechanical Turk crowdsourcing site, they hired people to describe the images verbally, using whatever phrasing came to mind, for about 10 to 20 seconds. For an initial demonstration of the researchers' approach, that kind of tailored data was necessary to ensure good results. But the ultimate aim is to train the system using digital video, with minimal human involvement.
"I think this will extrapolate naturally to video," Glass says.
To build their system, the researchers used neural networks, machine-learning systems that approximately mimic the structure of the brain. Neural networks are composed of processing nodes that, like individual neurons, are capable of only very simple computations but are connected to each other in dense networks. Data is fed to a network's input nodes, which modify it and feed it to other nodes, which modify it and feed it to still other nodes, and so on. When a neural network is being trained, it constantly modifies the operations executed by its nodes in order to improve its performance on a specified task.

The researchers' network is, in effect, two separate networks: one that takes images as input and one that takes spectrograms, which represent audio signals as changes of amplitude, over time, in their component frequencies. The output of the top layer of each network is a 1,024-dimensional vector -- a sequence of 1,024 numbers.

The final node in the network takes the dot product of the two vectors. That is, it multiplies the corresponding terms in the vectors together and adds them all up to produce a single number. During training, the networks had to try to maximize the dot product when the audio signal corresponded to an image and minimize it when it didn't. For every spectrogram that the researchers' system analyzes, it can identify the points at which the dot-product peaks. In experiments, those peaks reliably picked out words that provided accurate image labels -- "baseball," for instance, in a photo of a baseball pitcher in action, or "grassy" and "field" for an image of a grassy field.
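
A toy sketch of that two-branch design is below. The layer sizes, random inputs, and single linear-plus-ReLU "encoders" are simplified stand-ins chosen for illustration; they are not the architecture or training code used by the CSAIL group, only the idea of two embeddings compared by a dot product.

```python
import numpy as np

# Toy two-branch sketch (not the CSAIL model): one encoder maps image features
# and another maps audio spectrogram features to 1,024-dimensional vectors;
# the similarity score is their dot product, trained to be high for true pairs.

rng = np.random.default_rng(0)

def encode(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in encoder: one linear layer followed by a ReLU."""
    return np.maximum(weights @ x, 0.0)

image_features = rng.normal(size=4096)        # placeholder image representation
spectrogram_features = rng.normal(size=2048)  # placeholder audio representation

w_image = rng.normal(size=(1024, 4096)) * 0.01
w_audio = rng.normal(size=(1024, 2048)) * 0.01

image_vec = encode(image_features, w_image)        # 1,024-dim embedding
audio_vec = encode(spectrogram_features, w_audio)  # 1,024-dim embedding

similarity = float(image_vec @ audio_vec)  # dot product: the matching score
print(f"similarity score for this image/audio pair: {similarity:.3f}")
```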

In ongoing work, the researchers have refined the system so that it can pick out spectrograms of individual words and identify just those regions of an image that correspond to them.

Source: Materials provided by Massachusetts Institute of Technology. Original written by Larry Hardesty.


Author of this post :
Abhijit Chopra, Team member


WHY DRONES?

Drones could be the next major tech revolution to sweep the world, and these robotic flying machines are now being used for purposes that extend far beyond the secretive realm of the military. A new automated, flying ambulance completed its first solo flight, offering a potential solution for challenging search and rescue missions.


Completing such missions in rough terrain or combat zones can be tricky, with helicopters currently offering the best transportation option in most cases. But these vehicles need clear areas to land, and in the case of war zones, helicopters tend to attract enemy fire. Earlier this month, Israeli company Urban Aeronautics completed a test flight for a robotic flying vehicle that could one day go where helicopters can't.

On November 14, the company flew its robotic flyer, dubbed the Cormorant, on the craft's first solo flight over real terrain. The autonomous vehicle is designed to eventually carry people or equipment (as reflected in its former name, the AirMule) without a human pilot on board. Urban Aeronautics said the test was "a significant achievement for a student pilot, human or nonhuman," and said the company is "proud" of the vehicle's performance.

The Cormorant uses ducted fans rather than propellers or rotors to fly. These fans are effectively shielded rotors, which means the aircraft doesn't need to worry about bumping into a wall and damaging the rotors. Another set of fans propels the vehicle forward, according to Urban Aeronautics.

The robotic flyer pilots itself entirely through laser altimeters, radar and sensors. The system is "smart" enough to self-correct when it makes mistakes, company officials said. In a video released by Urban Aeronautics, the Cormorant tries to land, stops itself and then corrects its landing position.

WHY DID THEY MAKE IT?

The vehicle is effectively a decision-making system that can figure out what to do if the inputs from the sensors are off in some way, the company said. If the Cormorant detects a potential issue, the drone's robotic brain can decide what to do: go home, land and wait for more instructions, or try a different flight path, Urban Aeronautics said.


Despite the completion of this month's flight test, Urban Aeronautics still needs to refine some parts of the technology, the company said. For one, the test flight wasn't very long, lasting only a minute or two. And though the terrain was irregular (as in, not completely flat), it was still an open field without any real obstacles on either side. Further tests will look to improve how smoothly the aircraft goes from takeoff to level flight, and to increase speed and maneuverability, the company said in a statement.


Author of this post :
Dhruvi Rajput, Team member


A traditional computer uses long strings of “bits,” which encode either a zero or a one. A quantum computer, on the other hand, uses quantum bits, or qubits. What's the difference? Well, a qubit is a quantum system that encodes the zero and the one into two distinguishable quantum states. But because qubits behave quantum-mechanically, we can capitalize on the phenomena of "superposition" and "entanglement."


Defining the Quantum Computer :

  • The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. In a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.

  • Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.


What is superposition?

Superposition is essentially the ability of a quantum system to be in multiple states at the same time — that is, something can be “here” and “there,” or “up” and “down” at the same time.
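
As a small worked example of what a superposition looks like numerically: a single qubit can be written as two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. This is standard textbook notation, not code for any particular quantum machine.

```python
import numpy as np

# A single qubit as two complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# An equal superposition means measuring 0 or 1 each with probability 0.5.

zero = np.array([1, 0], dtype=complex)        # the |0> state
one = np.array([0, 1], dtype=complex)         # the |1> state
superposition = (zero + one) / np.sqrt(2)     # (|0> + |1>) / sqrt(2)

probabilities = np.abs(superposition) ** 2
print(f"P(measure 0) = {probabilities[0]:.2f}, P(measure 1) = {probabilities[1]:.2f}")
```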

APPLICATIONS

  1. FASTER AND MORE ACCURATE COMPUTING

    • Just like an atomic clock, quantum computers have a very low margin of error. Since superposition is the foundation stone of quantum computing, it allows a simultaneous search over candidate solutions for a given problem and finds the best possible solution within the resources available.

  2. OPTIMIZATION 

    • Imagine you are building a house, and have a list of things you want to have in your house, but you can’t afford everything on your list because you are constrained by a budget. What you really want to work out is the combination of items which gives you the best value for your money.

    • Typically, these are very hard problems to solve because of the huge number of possible combinations. With just 270 on/off switches, there are more possible combinations than atoms in the observable universe! (A quick numeric check appears after this list.)

  3. RADIOTHERAPY OPTIMIZATION 

    • Problems like optimizing cancer radiotherapy, where a patient is treated by aiming several radiation beams that intersect at the tumor, illustrate how a quantum computer and a classical high-performance computing (HPC) system can work together.

    • The goal when devising a radiation plan is to minimize the collateral damage to the surrounding tissue and body parts – a very complicated optimization problem with thousands of variables. Using the quantum computer with an HPC system will allow faster convergence on an optimal design than is attainable by using HPC alone.

  4. PROTEIN FOLDING

    • Simulating the folding of proteins could lead to a radical transformation of our understanding of complex biological systems and our ability to design powerful new drugs.

    • With an astronomical number of possible structural arrangements, protein folding is an enormously complex computational problem. Scientific research indicates that nature optimizes amino acid sequences to create the most stable protein, which correlates well with the search for the lowest-energy solutions.
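
As promised in the optimization item above, a quick back-of-the-envelope check of the "270 switches" claim: 2^270 is roughly 10^81, while the number of atoms in the observable universe is commonly estimated at around 10^80, a figure used here only as an assumption for comparison.

```python
import math

# Back-of-the-envelope check of the "270 on/off switches" claim. The ~1e80
# estimate for atoms in the observable universe is a commonly quoted figure,
# used here only for comparison.

combinations = 2 ** 270
atoms_estimate = 10 ** 80

print(f"2^270 is about 10^{math.log10(combinations):.0f}")
print(f"that is roughly {combinations / atoms_estimate:.0f} times the ~1e80 atom estimate")
```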


Author of this post :
Arunuday Ganju, Team member





Apple announced a lot of stuff at the WWDC 2016 keynote.

Apple's announcements today were spread across all four of its platforms: watchOS (Apple Watch), tvOS (Apple TV), OS X (now known as macOS), and iOS (iPhone/iPads).

According to Apple, iOS 10 is its biggest release yet, with incredible features in Messages and an all-new design for Maps, Photos, and Apple Music. With macOS Sierra, Siri makes its debut on the desktop and Apple Pay comes to the web. The latest watchOS offers easier navigation and a big boost in performance. And the updated tvOS brings expanded Siri searches.

Let's start with WWDC 2016!

watchOS

watchOS 3, shipping this fall but available to developers today, packs a bunch of new tricks:

– Instant loading of apps, doing away with the sluggish launches people have complained about since day 1.

– “Scribble” allows you to reply to a message by sloppily writing with your fingertip.

– New faces (including one just for physical activity that brings your steps taken, calories burned, etc. to the front).

– Like to run one watch face while you work but another at night? You can now let multiple faces run at once and just swipe between them.

– A new “Dock” interface that lets you quickly jump between apps that are running.

– Better support for users in wheelchairs.

tvOS


AppleTV’s tvOS is picking up some new stuff, too:

– “Dark mode”, a darker theme for nighttime viewing (or if you just don’t like AppleTV’s crazy bright look).

– If you download a video app to your iPhone and that app has an AppleTV version, it’ll auto download to your AppleTV.

– Most major TV channels have their own AppleTV app and, until now, you’ve had to give each and every one the credentials to your cable account. That’s a lot of typing. “Single Sign On” handles that automatically now.

– You can now use Siri voice search to find content within third party apps, like YouTube.

– Your iPhone Is A Good AppleTV Remote Again. It replicates the remote’s touchpad, allowing you to navigate AppleTV’s UI. The accelerometer/gyroscope can be used as a game controller. And you can use your iPhone’s keyboard to type on AppleTV again!

Developer preview launches today, and ships to everyone “this Fall”.

macOS Sierra

OS X Gets A Rebranding. Well, it wasn’t like the others. After 15 years as OS X, Apple’s Mac OS is now just... macOS.

The first release of (the operating system now known as) macOS will be called macOS Sierra. Like pretty much everything Apple announced today, Sierra will ship this Fall.

- Auto Unlock When You're Near. Your Mac can automatically unlock when it detects you’re sitting in front of it with your iPhone or Apple watch.

- Copy/Paste Between All Of Your Devices. Copy something on your computer, and immediately paste it on your iPhone or iPad (or vice versa).

- Automatically Free Up Hard Drive Space. macOS can free up storage by automatically backing up and removing local copies of files you haven’t used in a long time: things like old iPhone backups, redundant Mail data, and your browsing cache.

- Want to watch Netflix but need to keep working? A new PIP mode lets you pop a Safari video out to a little chromeless box that can be dragged around without taking up any more screen space than it needs.

- As rumored for months and months, Siri is now a part of macOS. You can ask Siri basic stuff, like movie times or weather predictions. But you can also ask more complicated things, like “Show me files that Ken sent me, but just the ones I’ve tagged ‘draft’.”

DETAILED REVIEW: Apple macOS Sierra

iOS 10

It wouldn't be WWDC without a new major release of iOS. While WWDC often tends to focus mostly on new behind-the-scenes developer stuff that users don’t necessarily see directly, Craig Federighi called iOS 10 “the biggest iOS release for users ever.”

- The New Lock Screen. Notifications now support 3D Touch: deep-press on an iMessage to see more messages from that thread for context. TouchID has also gotten so fast that, nine times out of ten, a quick tap of the home button unlocks the whole phone. Now, if you just want to see your notifications, you won’t need to press any buttons at all.

- Widgets Behind App Icons. Looking to make both 3D Touch and iOS widgets more useful, you can now access an app’s widget by 3D touching its icon.

- Siri Integration For Third Party Developers. Developers will now be able to hook into third party apps, enabling system-wide voice commands.

- QuickType Gets Smarter. QuickType — the buttons above the iOS keyboard that try to predict what you’re going to say — is getting a lot smarter by way of context.

- VoIP Integrated Right Into The Phone Dialer. VoIP-capable apps like Skype and WhatsApp can now hook right into the Phone application. No more crappy “John is calling! Open Skype!” notifications; the call can look just like a standard incoming call.

- Maps Gets A Refresh. Beyond an overall interface refresh, iOS’ built-in Maps app is picking up some tricks from Spotlight’s “Proactive” AI — namely, Maps will be able to dig into your calendars and location history to offer up directions to where it thinks you’re heading.

- Apple Music Gets A Cleaner Look. Pretty much right off the bat, Apple Music was bashed for being… a bit confusing. It was pretty, sure — but at first glance, it was a bit of a mess. Now it’s pretty AND easier to just pick up and use.

- All Of Your HomeKit Stuff In One Place. “Home” is an aggregator app, bringing all of these HomeKit devices — smart lights, connected garage door openers, etc — into one hub. You can set up “scenes” like “Night Mode” that lock your smartlocks and kill the lights, or geofenced triggers that open your garage door and turn on the AC when you get home.

- Messages Gets A Massive Refresh. When linking to things like images or YouTube videos, Messages will show inline previews of those items instead of just an ugly link. Type something, switch to the emoji keyboard, and any word that has a corresponding emoji will turn orange. Tap that orange word, and bam! Emoji-fied. Invisible Ink: garbled messages that only appear when you swipe across them. Why? Because sometimes you want a message to have a bit of anticipation/surprise factor, or you want to know a message won’t appear on screen until your recipient’s eyes (and not those of random snoops) are looking.

The developer preview lands today, a public beta will ship in July, but the public, built-for-everyone build will ship... you guessed it: this Fall.

DETAILED REVIEW: Apple iOS 10

And One More Thing...

Swift Playgrounds

Apple launched an iPad app called "Swift Playgrounds" that aims to teach users how to code in Swift using simple, user-friendly lessons in a custom, lightweight development environment.



It launches this fall.

And that's WWDC 2016! See you next year!



Author of this post :
Abhishek Jain, Co-Founder







CES, also known as the Consumer Electronics Show, is an internationally renowned electronics and technology trade show, attracting major companies and industry professionals worldwide. The annual show is held each January at the Las Vegas Convention Center in Las Vegas, Nevada, United States. Not open to the public, the Consumer Technology Association-sponsored show typically hosts previews of products and new product announcements.

The first CES was held in June 1967 in New York City. It was a spinoff from the Chicago Music Show, which until then had served as the main event for exhibiting consumer electronics. The event had 17,500 attendees and over 100 exhibitors; the kickoff speaker was Motorola chairman Bob Galvin. From 1978 to 1994, CES was held twice each year: once in January in Las Vegas known as Winter Consumer Electronics Show (WCES) and once in June in Chicago, known as Summer Consumer Electronics Show (SCES).

In particular, it is an invitation-only event held in Las Vegas; the invitations go to companies worldwide, and those companies preview their upcoming products or launch new products at CES.




Author of this post :
Abhishek Jain, Co-Founder



Domain Name Servers, also known as "DNS" servers, are the machines that find the IP address for a given domain name. In other words, Domain Name Servers (DNS) are the Internet's equivalent of a phone book. They maintain a directory of domain names and translate them to Internet Protocol (IP) addresses. This is necessary because, although domain names are easy for people to remember, computers and machines access websites based on IP addresses.

For a better understanding, here is an example showing what a domain name is and how it is connected to an IP address.

EXAMPLE: www.google.com

Above is a link that takes you to the Google web page. Here "google.com" is the domain name and ".com" is the top-level domain. When you visit the site, a DNS server looks up the IP address registered for that domain name, and your browser connects to that address.

Keep in mind that every domain name maps to one or more Internet Protocol (IP) addresses.
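
A minimal sketch of a DNS lookup using only Python's standard library is shown below; the addresses printed will vary by location and over time, since google.com resolves to many different IPs.

```python
import socket

# Minimal DNS lookup sketch: ask the system resolver which IP addresses
# the domain name "www.google.com" currently maps to. The actual addresses
# returned will differ by location and change over time.

domain = "www.google.com"

addresses = {
    info[4][0]
    for info in socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP)
}
for ip in sorted(addresses):
    print(f"{domain} -> {ip}")
```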

For further information on DNS, visit: Wikipedia/DNS


Author of this post :
Abhijit Chopra, Team member


USB Type-C is a description of the port connection itself. It’s small, compact, and replaces the standard USB Type-A and B connections as well as the myriad of micro and mini USB ports. Basically, it’s one USB connection type to rule them all. And best of all, it’s reversible, so the days of flipping your USB cable three times before inserting it correctly may finally be numbered. Over the next few years, look for USB Type-C to begin becoming the universal port for all devices including desktop, laptop, and mobile.


One thing to note: because announcements of Type-C connections have come hand in hand with USB 3.1, many people assume they're the same, or at the very least that all Type-C runs on the 3.1 spec. This is not the case. Remember, Type-C is the connection type and may actually run on a lesser spec – even USB 2.0 – so don't assume you'll be getting all that 3.1 goodness just because you see that tiny reversible port.

USB 3.1 (aka USB 3.1 Gen 2) is the successor to USB 3.0. Identifiable by its bright turquoise port, USB 3.1 doubles the transfer speed of 3.0 to a whopping 10 Gbps. USB Power Delivery 2.0 makes a big step forward as well, with up to 100 W of power. And like previous versions of USB, it is fully backwards compatible with its predecessors.
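
To put those line rates in perspective, here is a quick best-case calculation; it ignores encoding and protocol overhead, so real-world transfers are slower, and the 10 GB file size is just an illustrative figure.

```python
# Theoretical best-case transfer time at the quoted line rates, ignoring
# encoding and protocol overhead (real-world throughput is noticeably lower).

file_size_gb = 10  # illustrative example file size, in gigabytes
file_size_bits = file_size_gb * 8e9

for name, gbps in [("USB 3.0 (5 Gbps)", 5), ("USB 3.1 Gen 2 (10 Gbps)", 10)]:
    seconds = file_size_bits / (gbps * 1e9)
    print(f"{name}: ~{seconds:.0f} s for a {file_size_gb} GB file")
```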

When used with the Type-C connection, things get really interesting for 3.1. The 100W of PD v2.0 is enough to power and charge full sized notebooks, which means the proprietary AC port may soon be replaced by this universal alternative. With 4 data lanes, USB 3.1 Type-C can even carry DisplayPort and HDMI video signals, further adding to its ubiquity. Again, one port to rule them all.

USB Type-C promises to solve this problem with a universal connector that's capable of twice the theoretical throughput of USB 3.0 and can provide far more power. That's why Apple is pairing up Type-C and USB 3.1 to eliminate the power connector on the MacBook. It's a goal we agree with, even if we're less thrilled with the company's decision to dump all other USB ports with that single exception. Google's approach, providing two USB-C and two regular USB 3.0 ports, is obviously preferable, even though it adds a bit of bulk to the machine.


Author of this post :
Abhijit Chopra, Team member


Gone are the days when videos used to be flat and unrealistic. YouTube, the world's second largest search engine after its parent company Google, has recently announced support for 3D stereoscopic video on top of its existing 360° videos. This allows users to view a video in any direction, not just where the camera is pointing.

Now, with the addition of 3D inside 360°, our virtual world just got more real. Remember those punches flying out of the screen and that close-to-real encounter with rain falling while you sat in your theatre seat? That 3D effect really spiced up the classic style of movie presentation. Adding this amazement to the all-new 360° view will result in an even more breathtaking virtual reality experience.


This VR (Virtual Reality) can be experienced right away if you have a VR device like the Oculus Rift, Samsung Gear VR or Google Cardboard. With this upgrade, users can see immersive three-dimensional video as they swivel around to change their view. Wondering how these videos are shot? They are filmed with cameras pointing in different directions and then stitched together using special software.

There is an increasing library of 360° videos on YouTube. Once opened, they can be viewed in any direction, either by panning the video in your Chrome browser using the WASD keys or by moving your phone in the YouTube mobile app. Music videos from well-known artists, F1 rides, horror episodes, and views of an airport are all available in 360°. A recent upload by Bud Light is the first ad in 360°. This support has opened a whole new dimension for moviemakers and storytellers; by exploiting it, they get nothing but enthralled audiences.

Before, we used to watch movies just as an audience, but now we will be able to experience them right through the eyes of the characters. You may end up falling from an aeroplane in a wingsuit while chilling in your living room, or you might visit the moon from your desk without moving an inch!

VR can’t get much closer than this.

At Rework Deep Learning Summit in Boston, Google research scientist Kevin Murphy unveiled a project that uses sophisticated deep learning algorithms to analyze a still photo of food, and estimate how many calories are on the plate. It's called Im2Calories.



Like many deep learning applications, it marries visual analysis (in this case, determining the depth of each pixel in an image) with pattern recognition. Im2Calories can draw connections between what a given piece of food looks like and vast amounts of available caloric data.

For example, if Im2Calories spots a burger, it's because the pixels in the image resemble those in existing shots of burgers, not because a researcher held the system's hand, so to speak, during various practice runs.


Author of this post :
Abhijit Chopra, Team member


Facebook’s artificial intelligence team has been working on a new experimental facial recognition algorithm that recognizes people even when their faces aren’t clear (mostly covered, looking away, etc.). Instead, it looks for other characteristics to identify the person. For example, it could recognize you by your hairstyle, clothes, posture, or body. Awesome but creepy, right?

Yann LeCun, head of artificial intelligence at Facebook, challenged himself and his teammates to create a new algorithm that could adapt to situations where Facebook is unable to see someone’s face clearly. This artificial intelligence works much like humans do: we can recognize our friends even when their faces are covered.



The Facebook AI team took about 40,000 pictures from Flickr, some in which people’s faces weren’t fully visible, and processed them with a sophisticated neural network. The final algorithm was able to recognize people with covered faces with an excellent 83% accuracy, which is impressive given that other facial recognition technologies rely on clear, uncovered faces.

Facebook recently launched Moments, a new photo-sharing app for mobile, and this technology could be used to power it.
“There are a lot of cues we use. People have characteristic aspects, even if you look at them from the back,” says LeCun. “For example, you can recognise Mark Zuckerberg very easily, because he always wears a gray T-shirt.”
But privacy implications won’t be far behind once this sci-fi tech launches, as people who don’t want to be recognized without looking at the camera will surely raise concerns.

So good luck – now you can’t hide from the human-like perfection of Facebook’s facial recognition.


Author of this post :
Abhishek Jain, Co-Founder


Apple’s Worldwide Developers Conference (WWDC) was everything rumoured, teased and then some. If you didn't sit through the keynote, here’s a run-down of the highlights you shouldn't miss.

OS X 10.11: El Capitan

New features in the new OS X include new gestures, such as shaking to enlarge the mouse cursor, and a new way to pin websites in Safari. The browser will also add a speaker icon to the URL bar to mute music coming from any open tab.


Spotlight is also getting more contextual. Instead of looking for files by name, you can now describe what you’re looking for, such as “Files I worked on last June” to bring up documents last edited at that time.

El Capitan will also allow users to split their screen into multi-tasking windows (Windows 10 style) by holding the Maximize button and dragging left or right.

Lastly, OS X 10.11 is getting the Metal 3D graphics SDK to improve gaming and apps on its desktop. Apple says this should accelerate app launches by 1.4 times, and make it twice as fast to switch between apps.

El Capitan is available with a public beta coming in July and a full, free roll-out to all this fall.

iOS 9

The core improvement Apple wanted to make on iOS 9 this go-round was adding intelligence to the operating system. Federighi started the segment off with Siri’s upcoming improvements.


New “context sensitive” features include the ability to tell Siri to “remind me about this” and it will know you are referring to the webpage currently on Safari. If you receive a phone number but are not sure who’s calling, you can also ask Siri to search through your emails to find any matches.

Siri can also suggest people to invite to meetings, or apps that you might like based on your usage behavior at particular times of day.

Apple also unveiled an API for search to help developers deep link their apps from mobile Spotlight searches. All of your searches and suggestions are not linked to your Apple ID or shared with third parties.

New improvements are also coming to apps like Notes and Maps, indexing links on the former and transit information on the latter. When you look up businesses on Maps, it will also give you info on whether or not they accept Apple Pay, because of course.


Additionally, Apple announced a new app for iOS 9 called News to personalize news content, updating anytime the user opens the app. The Flipboard-like app includes graphics that adapt to the news source’s site aesthetics, and allows users to browse publishers for top stories.

News will roll out first to the US, UK and Australia.

Some updates coming to the iPad: New gestures are being added such as using two fingers to tap on the keyboard to turn it into a touchpad. This is helpful for when you want to drag text during an email, for example.

Multi-window support is also coming to the iPad, much like OS X El Capitan. You can also simultaneously scroll on both screens… if that’s your thing.
There’s a new slide-over view as well, so you can drag in another app from the side for a quick glance while you’re using another app in fullscreen. This could be useful for when you’re watching a video and want to quickly check your Twitter feed, for example. You can even switch between apps and have the video playing picture-in-picture.

WatchOS

Before talking about the new version of the watchOS, Apple touted its latest App Store numbers: It has recently surpassed 100 billion downloads and paid out $30 billion to developers.

On to watchOS: New watch faces are headed to watchOS 2, such as a Photo Album face to rotate through your pictures in the background (e-picture frame style). Developers can also make their own “complications” by showing data for anything you want right on the watch face, from flight times to sports scores.


Users can rotate their digital crown to “Time Travel” and see ‘future’ data, such as appointments later in the day and what the weather will be like at that time.

Other updates include the ability to respond to emails, watch videos and make FaceTime audio calls straight from the Apple Watch. You can also tell Siri to start the Workout app without ever touching the watch. The aforementioned Wallet and live transit data via Maps are headed to Watch, too.

Apple also touched upon the new WatchKit support recently teased, such as the ability for Watch to work with known Wi-Fi networks rather than via a connected iPhone.

HomeKit will also be natively supported on the watch, so you can control your home temperature from the digital crown, for example. Taptic engine and accelerometer information will be made available to developers as well.

Apple Music

Apple officially confirmed Apple Music, its streaming service powered by iTunes. Users can search their content in “My Music,” or find song recommendations in the “For You” tab. It’ll also show you the song that’s coming up next, to prepare your eardrums.


To power “For You,” Apple Music will ask about your musical preferences to source songs, artists and playlists. You can also browse “Hot Tracks,” “Recent Releases,” “Top Charts” and even ad-free music videos.

Siri will also work contextually with Apple Music when you prompt it with things like “Play me the top songs from 1982” or “Play that song from ‘Selma.’”


Beats1 is a 24/7 global radio station, hosted by former BBC Radio One DJ Zane Lowe. The service hopes to not only play great music, but help users discover new content. Artists can upload their work to Apple Music Connect to help increase their exposure, regardless of whether they’re signed.

Apple Music will launch on June 30 for $9.99 a month. A family plan will also be available for $14.99 a month for up to six members. Android support is coming in the fall.

That's all from WWDC 2015.

Source1         Source2


Author of this post :
Abhishek Jain, Co-Founder



Mobile link sharing is a clumsy rigmarole of app switching and copy & pasting. So to get more people posting links, Facebook is testing an in-app keyword search engine that lets you find websites and articles to add to your status updates. Alongside buttons to add photos or locations, some iOS users are seeing a new “Add A Link” option. Just punch in a query, and Facebook will show a list of matching links you might want to share, allow you to preview what’s on those sites, and let you tap one to add it to your status with a caption or share statement. Results seem to be sorted by what users are most likely to share, highlighting recently published sites that have been posted by lots of people.


Facebook confirmed to TechCrunch that “We’re piloting a new way to add a link that’s been shared on Facebook to your posts and comments.”

The trial is only available to a small group in the US, but there’s a lot of potential here. Facebook says it has indexed over one trillion posts to let people search for links that have been shared with them. That means it’s a search engine powered by data Google doesn’t have. If rolled out to all users, it would let them avoid Googling or digging through Facebook’s News Feed to find a link to share. The “Add A Link” button could get users sharing more news and other publisher-made content. Not only does that fill the News Feed with posts that Facebook can put ads next to, it also gives the company structured data about what kind of news and publishers you care about, as well as the interests of your friends, depending on whether they click or Like your story.



Facebook has become the juggernaut of referral traffic, delivering almost 25% of all social clicks in late 2014, compared to just 0.88% for Twitter. Keeping that stat high goads publishers to push their content to Facebook, expend resources to format it as appealingly as possible, and buy ads to amplify its reach. This all benefits Facebook. A boost from the Add A Link button could make publishers even more dependent on the social network.

The feature might also feed nicely into Facebook’s hosted-content project. Several outlets have reported on Facebook’s plans to host content from publishers, such as news articles, and display it natively in the News Feed rather than making users click out and wait for slow mobile webpages to load. In exchange for enriching its feed and making it even tougher to look away from, Facebook plans to split ad revenue with the publishers.

The Wall Street Journal says the program could launch soon, and Facebook might give publishers 100% of ad revenue if they sell what’s shown next to the articles. If Facebook sells the ads, it might get 30%. The scheme would keep users on Facebook when they consume content rather than popping into browser windows where they might be more likely to end their Facebook session. Similarly, Add A Link could keep users on Facebook when they go to share content too. The garden’s walls grow ever taller.

Source       Image Credit


Author of this post :
Abhishek Jain, Co-Founder