Virtual Reality Box

A virtual reality headset is a head-mounted device that provides virtual reality for the wearer. VR headsets are widely used with computer games but they are also used in other applications, including simulators and trainers. They comprise a stereoscopic head-mounted display (providing separate images for each eye), stereo sound, and head motion tracking sensors (which may include gyroscopes, accelerometers, structured light systems, etc.). Some VR headsets also have eye tracking sensors and gaming controllers.

Because virtual reality headsets stretch a single display across a wide field of view (up to 110° for some devices, according to manufacturers), the magnification factor makes flaws in display technology much more apparent. One issue is the so-called screen-door effect, where the gaps between rows and columns of pixels become visible, much like looking through a screen door. This was especially noticeable in earlier prototypes and development kits, which had lower resolutions than the retail versions.

The lenses of the headset are responsible for mapping the up-close display to a wide field of view, while also providing a more comfortable distant point of focus. One challenge with this is providing consistency of focus: because eyes are free to turn within the headset, it’s important to avoid having to refocus to prevent eye strain.

Virtual reality headsets are currently being used as a means to train medical students for surgery. They allow students to perform essential procedures in a virtual, controlled environment: students operate on virtual patients, acquiring the skills needed to perform surgeries on real patients, and can revisit a surgery from the perspective of the lead surgeon.
Traditionally, students had to participate in surgeries and often missed essential parts. Now, with VR headsets, students can watch surgical procedures from the lead surgeon's perspective without missing anything essential. They can also pause, rewind, and fast-forward a surgery, and perfect their techniques in a real-time simulation in a risk-free environment.
Latency requirements
Virtual reality headsets have significantly stricter requirements for latency (the time it takes for a change in input to have a visual effect) than ordinary video games. If the system is too sluggish to react to head movement, it can cause the user to experience virtual reality sickness, a kind of motion sickness. According to a Valve engineer, the ideal latency would be 7-15 milliseconds. A major component of this latency is the refresh rate of the display, which has driven the adoption of displays with a refresh rate from 90 Hz (Oculus Rift and HTC Vive) to 120 Hz (PlayStation VR).
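The refresh rate alone puts a hard floor under that latency budget, since the system cannot show an update faster than the display refreshes. A quick back-of-the-envelope check (illustrative arithmetic only):

```python
# The display refresh interval is a hard floor on motion-to-photon latency,
# which is why higher refresh rates matter so much for VR.
def frame_time_ms(refresh_hz: float) -> float:
    """Time between display refreshes, in milliseconds."""
    return 1000.0 / refresh_hz

budget_60 = frame_time_ms(60)    # ~16.7 ms: already above the 7-15 ms target
budget_90 = frame_time_ms(90)    # ~11.1 ms: Oculus Rift / HTC Vive
budget_120 = frame_time_ms(120)  # ~8.3 ms: PlayStation VR
```

At 60 Hz the refresh interval alone already exceeds the ideal 7-15 ms window, which is why 90 Hz became the effective minimum for PC headsets.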
The graphics processing unit (GPU) also needs to be more powerful to render frames more frequently. Oculus cited the limited processing power of Xbox One and PlayStation 4 as the reason why they are targeting the PC gaming market with their first devices.

Asynchronous reprojection / time warp
A common way to reduce the perceived latency or compensate for a lower frame rate, is to take an (older) rendered frame and morph it according to the most recent head tracking data just before presenting the image on the screens. This is called asynchronous reprojection or “asynchronous time warp” in Oculus jargon.

PlayStation VR synthesizes “in-between frames” in this manner, so games that render natively at 60 fps result in 120 updates per second. SteamVR (HTC Vive) will also use “interleaved reprojection” for games that cannot keep up with its 90 Hz refresh rate, dropping down to 45 fps.

The simplest technique is applying only a projection transformation to the images for each eye (simulating rotation of the eye). The downside is that this approach cannot take into account the translation (changes in position) of the head. And the rotation can only happen around the axis of the eyeball, instead of the neck, which is the true axis of head rotation. When applied multiple times to a single frame, this causes “positional judder”, because position is not updated with every frame.
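For small rotations, this rotation-only warp can be approximated as shifting the rendered image by a few pixels. The sketch below is a deliberate simplification (real headsets re-project in full 3D on the GPU, and the focal length here is a made-up pinhole-camera value):

```python
import math

# Conceptual sketch of rotation-only time warp: approximate the small head
# rotation (yaw/pitch) that happened between render time and scan-out as a
# 2D pixel shift of the already-rendered frame. Note what it cannot do:
# there is no term for head *translation*, which is exactly the limitation
# that causes positional judder.
def timewarp_shift(yaw_delta_rad: float, pitch_delta_rad: float,
                   focal_px: float) -> tuple:
    """Pixel offset (dx, dy) approximating a small rotation, pinhole model."""
    dx = math.tan(yaw_delta_rad) * focal_px
    dy = math.tan(pitch_delta_rad) * focal_px
    return dx, dy

# Head turned 1 degree to the right between render and scan-out:
dx, dy = timewarp_shift(math.radians(1.0), 0.0, focal_px=600)
```

With a hypothetical 600-pixel focal length, a 1° yaw amounts to roughly a ten-pixel shift, which is why even a last-millisecond warp visibly reduces perceived latency.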

A more complex technique is positional time warp, which uses pixel depth information from the Z-buffer to morph the scene into a different perspective. This produces other artifacts, because it has no information about faces that are hidden due to occlusion and cannot compensate for position-dependent effects like reflections and specular lighting. While it gets rid of positional judder, judder still presents itself in animations, as time-warped frames are effectively frozen.

What is Augmented Reality?

Augmented Reality was first achieved, to some extent, by a cinematographer called Morton Heilig in 1957. He invented the Sensorama, which delivered visuals, sounds, vibration and smell to the viewer. Of course, it wasn't computer controlled, but it was the first example of an attempt at adding additional data to an experience. Wikipedia describes Augmented Reality as “a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated or extracted real-world sensory input such as sound, video, graphics or GPS data”.

In simple words, augmented reality can be explained as adding content to the real world that is not actually present there. It creates or adds virtual things on top of the real world, bringing 3D content to your eyes through a medium such as a phone camera or webcam.

The first properly functioning AR system was probably the one developed at the USAF Armstrong Research Lab by Louis Rosenberg in 1992. Called Virtual Fixtures, it was an incredibly complex robotic system designed to compensate for the lack of high-speed 3D graphics processing power in the early 90s. It enabled the overlay of sensory information on a workspace to improve human productivity.

The best and most relevant example is the app popularly known as Pokémon Go. Those who have played it know what it is: the game creates virtual characters augmented into the actual world. The basic concept is to catch Pokémon; as you open the app, you see a different world layered over the same world. It takes the real world as a base and shows augmented/virtual effects on top of it.

There are some other popular apps besides Pokémon Go worth trying if you want a good experience of augmented reality:

  1. Ink hunter
  2. Augment
  3. Holo
  4. Sun Seeker
  5. Aurasma
  6. Quiver

 

Augmented reality is used in many fields of study and practice, such as:

  1. Education

AR would also be a way for parents and teachers to achieve their goals for modern education, which might include providing more individualized and flexible learning, making closer connections between what is taught at school and the real world, and helping students become more engaged in their own learning.

  2. Medical

AR provides surgeons with patient monitoring data in the style of a fighter pilot’s heads-up display, and allows patient imaging records, including functional videos, to be accessed and overlaid.

  3. Military

In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier’s goggles in real time. Virtual maps and 360° camera imaging can also be rendered to aid a soldier’s navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center. From the soldier’s viewpoint, people and various objects can be marked with special indicators to warn of potential dangers.

  4. Video Games

A number of games like Pokémon Go and others have been developed. The gaming industry has embraced AR technology in a way that puts it within reach of ordinary people.

And much more.

Future of Augmented Reality

Experts predict the AR market could be worth $122 billion by 2024. This report, covered by the BBC, suggests augmented reality has a very big market ahead as development continues.

Laravel – best PHP framework

Laravel is one of the most widely used, open-source modern web application frameworks, designed for building customized web applications quickly and easily.

Developers prefer Laravel over other frameworks because of the performance, features, and scalability it provides. It follows the Model View Controller (MVC) pattern, which makes it more structured than plain PHP. It attempts to take the pain out of development by easing common tasks used in the majority of web projects, such as authentication, routing, sessions and caching. It has a unique architecture, where it is possible for developers to create their own infrastructure specifically designed for their application. Laravel is a good fit not only for large projects but also for small ones.

Laravel's first beta release was made available on June 9, 2011, followed by the Laravel 1 release later the same month.

Features of Laravel:

  1. Modularity: Modularity is the degree to which a system's components can be separated and recombined. You split the business logic into different parts that belong together.
  2. Authentication: Authentication is the most important part of any web application, and developers used to spend enormous amounts of time writing authentication code; this has become much simpler since Laravel 5.
  3. Application Logic: Application logic can be implemented within any application, either using controllers or directly in route declarations, using syntax similar to the Sinatra framework. Laravel is designed to give developers the flexibility they need to create everything from very small sites to massive enterprise applications.
  4. Caching: Caching is temporary data storage used to hold data for a while so it can be retrieved quickly. It is often used to reduce how frequently we need to access a database or other remote services, and it can be a wonderful tool for keeping your application fast and responsive.
  5. Method or Dependency Injection: Laravel's inversion of control (IoC) container is a powerful tool for managing class dependencies. Dependency injection is a method of removing hard-coded class dependencies, and Laravel's IoC container is one of its most used features.
  6. Routing: Laravel makes routing easy to approach. Routes can be triggered in the application with good flexibility and control over matching the URL.
  7. Restful Controllers: Restful controllers provide an optional way to separate the logic behind serving HTTP GET and POST requests.
  8. Testing & Debugging: Laravel is built with testing in mind; in fact, support for testing with PHPUnit is included out of the box.
  9. Automatic Pagination: Laravel simplifies the task of implementing pagination, replacing the usual manual approaches with automated methods integrated into the framework.
  10. Template Engine: Blade is a simple yet powerful templating engine provided with Laravel. Unlike controller layouts, Blade is driven by template inheritance and sections.
  11. Database Query Builder: Laravel's database query builder provides a convenient, fluent interface for creating and running database queries.
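Laravel's query builder itself is PHP, but the fluent-interface idea behind it is language-agnostic: each call returns the builder itself so calls chain. A rough illustration (a hypothetical Python class, not Laravel's actual API):

```python
# Minimal sketch of a fluent query builder. The trick is that where() and
# limit() each return self, so calls chain, and to_sql() renders the result.
class QueryBuilder:
    def __init__(self, table: str):
        self._table = table
        self._wheres = []
        self._limit = None

    def where(self, column: str, op: str, value):
        self._wheres.append(f"{column} {op} {value!r}")
        return self  # returning self is what makes the chain work

    def limit(self, n: int):
        self._limit = n
        return self

    def to_sql(self) -> str:
        sql = f"SELECT * FROM {self._table}"
        if self._wheres:
            sql += " WHERE " + " AND ".join(self._wheres)
        if self._limit is not None:
            sql += f" LIMIT {self._limit}"
        return sql

sql = QueryBuilder("users").where("age", ">", 18).limit(10).to_sql()
# sql == "SELECT * FROM users WHERE age > 18 LIMIT 10"
```

A real builder would use bound parameters rather than interpolating values into the SQL string; the sketch skips that to keep the chaining pattern visible.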

Multi-factor authentication (MFA)

Multi-factor authentication is a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism, typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are).

Two-factor authentication

Two-factor authentication is a type of multi-factor authentication that combines two different components.

A good example from everyday life is the withdrawing of money from an ATM; only the correct combination of the bank card (something that the user possesses) and a PIN (personal identification number, something that the user knows) allows the transaction to be carried out.

 

The authentication factors of a multi-factor authentication scheme may include:

  • Some physical object in the possession of the user, such as a USB stick with a secret token, a bank card, a key, etc.
  • Some secret known to the user, such as a password, PIN, TAN, etc.
  • Some physical characteristic of the user (biometrics), such as a fingerprint, eye iris, voice, typing speed, pattern in key press intervals, etc.
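The defining rule above is that the evidence must span at least two distinct categories: two passwords are still single-factor. That rule can be sketched in a few lines (the factor labels and data shapes are hypothetical, not any real library's API):

```python
# Hypothetical factor-category labels for the sketch below.
KNOWLEDGE, POSSESSION, INHERENCE = "knowledge", "possession", "inherence"

def is_multi_factor(presented) -> bool:
    """True if the presented evidence spans >= 2 distinct factor categories."""
    categories = {category for category, _evidence in presented}
    return len(categories) >= 2

# Bank card + PIN (the ATM example): two categories, so this is MFA.
atm = [(POSSESSION, "bank card"), (KNOWLEDGE, "PIN")]

# Password + security answer: two secrets, but one category, so it is not.
two_passwords = [(KNOWLEDGE, "password"), (KNOWLEDGE, "security answer")]
```

The second example is the common pitfall: adding a second *secret* adds no second *factor*.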

 

Knowledge factors

Knowledge factors are the most commonly used form of authentication. In this form, the user is required to prove knowledge of a secret in order to authenticate.

A password is a secret word or string of characters that is used for user authentication. This is the most commonly used mechanism of authentication, and many multi-factor authentication techniques rely on a password as one factor. Variations include both longer secrets formed from multiple words (a passphrase) and the shorter, purely numeric personal identification number (PIN) commonly used for ATM access. Traditionally, passwords are expected to be memorized.

Possession factors

Possession factors (“something only the user has”) have been used for authentication for centuries, in the form of a key to a lock. The basic principle is that the key embodies a secret which is shared between the lock and the key, and the same principle underlies possession factor authentication in computer systems. A security token is an example of a possession factor.

Disconnected tokens

Disconnected tokens have no connections to the client computer. They typically use a built-in screen to display the generated authentication data, which is manually typed in by the user.
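The codes such disconnected tokens display are commonly derived with the TOTP algorithm (RFC 6238): HMAC a shared secret with the current 30-second time step, then truncate the result to six digits. A compact sketch, using a made-up secret:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """One-time code per RFC 6238 (HMAC-SHA1 over the current time step)."""
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The secret here is a made-up example; real tokens share it at enrollment.
code = totp(b"example-shared-secret", int(time.time()))
```

Because both the token and the server compute the same function over the shared secret and the clock, the user can simply type the displayed digits with no connection between the two devices.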

Connected tokens

Connected tokens are devices that are physically connected to the computer to be used. These devices transmit data automatically. There are a number of different types, including card readers, wireless tags and USB tokens.

Inherence factors

These are factors associated with the user, and are usually biometric methods, including fingerprint readers, retina scanners or voice recognition.

 

On-screen fingerprint sensor

First on-screen fingerprint sensor –

The world’s first phone with a fingerprint scanner built into the display was as awesome as I hoped it would be.

There’s no home button breaking up your screen space, and no fumbling for a reader on the phone’s back. I simply pressed my index finger on the phone screen in the place where the home button would be. The screen registered my digit, then spun up a spiderweb of blue light in a pattern that instantly brings computer circuits to mind. I was in.

Such a simple, elegant harbinger of things to come: a home button that appears only when you need it and then gets out of the way.

How in-display fingerprint readers work

The fingerprint sensor, made by sensor company Synaptics, lives beneath the 6-inch OLED display. That’s the “screen” you’re actually looking at beneath the cover glass.

When your fingertip hits the target, the sensor array turns on the display to light your finger, and only your finger. The image of your print makes its way to an optical image sensor beneath the display.

It’s then run through an AI processor that’s trained to recognize 300 different characteristics of your digit, like how close the ridges of your fingers are. It’s a different kind of technology than what most readers use in today’s phones.

Because the new technology costs more to make, it’ll hit premium phones first before eventually making its way down the spectrum as the parts become more plentiful and cheaper to make.

Vivo’s phone is the first one we’ve gotten to see with the tech in real life.

Vivo’s been working on putting a fingerprint sensor underneath the screen for the last couple of years, and now it’s finally made one that’s ready for production.

The company had already announced last year that it had developed “in-display fingerprint scanning” technology for a prototype phone. That version used an ultrasonic sensor and was created with support from Qualcomm.

The new version of the finger-scanning tech is optical-based and was developed with Synaptics. In a nutshell, how the technology works is the phone’s OLED display panel emits light to illuminate your fingerprint. Your lit-up fingerprint is then reflected into an in-display fingerprint sensor and authenticated.

It’s really nerdy stuff; all you really need to know is that phones with fingerprint sensors on the front are back again, and this time without thick bezels above and below the screen.

 

 

 

Should e-sports come to the Olympics?

Those who have been spending, or rather wasting, hours sitting in front of a screen all day explaining to the world how e-sports will make them famous one day may be in for a disappointment. That said, let us also acknowledge that people are earning real money in the world of e-sports, and if reports are to be believed, e-sports earnings are at an all-time high: twelve e-sports titles pay as much as $1 million in prize money. This makes us wonder whether e-sports should already be welcomed into the Olympics. It was speculated for quite some time that the growth of e-sports would carry them all the way to the Olympics sooner rather than later. Recently, however, International Olympic Committee President Thomas Bach suggested otherwise and received a huge rebuke from video game fanatics all over the world.

A career already

E-sports are already becoming a prime-time profession, with lead players of the most popular games earning tons of money. These sports are also gaining massive support as well as sponsorship from top companies like BMW. It is evident that youth is, and will remain, attracted to them. Why not make it official already by introducing them to the Olympics?

Physical fitness

Apart from the fact that the spirit of the Olympics has always been about physical strength, motor reflexes and fitness, it is worth noting that e-sports can make a person extremely lazy, in many cases obese, and can cause injury to the hands and fingers of those who play for prolonged periods of time. This is absolutely against the spirit of the Olympics.

More viewers

More viewers and more sponsorships would be attracted to the Olympics if e-sports were given a place, which would eventually lead to better prize money for athletes.

Violence

Even if e-sports made it to the Olympics, fans might still be disappointed, as the most popularly played video games would likely not be included: they are often violent, full of explosions and killing.
It might be better for e-sports to have a separate event of their own, like an e-Olympics, instead of mixing with the existing culture of sports at the Olympics.

Net Neutrality


Net Neutrality-
It is the principle that Internet service providers must treat all data on the Internet the same, and not discriminate or charge differently by user, content, website, platform, application, type of attached equipment, or method of communication. For instance, under these principles, Internet service providers are unable to intentionally block, slow down or charge money for specific websites and online content.

History-
The term was coined by professor Tim Wu in 2003 as an extension of the longstanding concept of a common carrier, which was used to describe the role of telephone systems.
An example of a violation of net neutrality principles was the Internet service provider Comcast’s secret slowing (“throttling”) of uploads from peer-to-peer file sharing (P2P) applications by using forged packets. Comcast did not stop blocking these protocols, like BitTorrent, until the Federal Communications Commission ordered it to stop. In another, smaller example, the Madison River Communications company was fined US$15,000 by the FCC in 2004 for restricting its customers’ access to Vonage, which rivaled its own services. AT&T was also caught limiting access to FaceTime, so only those users who paid for AT&T’s new shared data plans could access the application. In July 2017, Verizon Wireless was accused of throttling after users noticed that videos played on Netflix and YouTube were slower than usual, though Verizon commented that it was conducting “network testing” and that net neutrality rules permit “reasonable network management practices”.

Open Internet

Under an “open Internet” schema, the full resources of the Internet and means to operate on it should be easily accessible to all individuals, companies, and organizations.
Applicable concepts include: net neutrality, open standards, transparency, lack of Internet Censorship, and low barriers of entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some observers as closely related to open-source software, a type of software program whose maker allows users access to the code that runs the program, so that users can improve the software or fix bugs.
Proponents of net neutrality see this as an important component of an “open Internet”, wherein policies such as equal treatment of data and open web standards allow those using the Internet to easily communicate, and conduct business and activities without interference from a third party.
In contrast, a “closed Internet” refers to the opposite situation, wherein established persons, corporations, or governments favor certain uses, restrict access to necessary web standards, artificially degrade some services, or explicitly filter out content. Some countries block certain websites or types of sites, and monitor and/or censor Internet use using Internet police, a specialized type of law enforcement, or secret police.

Traffic shaping

Traffic shaping is the control of computer network traffic to optimize or guarantee performance, improve latency (i.e., decrease Internet response times), and/or increase usable bandwidth by delaying packets that meet certain criteria. In practice, traffic shaping is often accomplished by “throttling” certain types of data, such as streaming video or P2P file sharing. More specifically, traffic shaping is any action on a set of packets (often called a stream or a flow) which imposes additional delay on those packets such that they conform to some predetermined constraint (a contract or traffic profile). Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm.
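The "predetermined constraint" above is classically enforced with a token bucket, one standard mechanism behind bandwidth throttling: packets may only pass while tokens are available, and tokens refill at the contracted rate. A minimal sketch with illustrative parameter values:

```python
# Minimal token-bucket shaper. A packet "conforms" to the traffic contract
# if enough tokens have accumulated; otherwise the shaper delays or drops it.
class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s     # long-term average rate (the contract)
        self.capacity = burst_bytes      # maximum burst size
        self.tokens = burst_bytes        # bucket starts full
        self.last = 0.0                  # time of the last check, in seconds

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes  # conforming: send immediately
            return True
        return False                     # non-conforming: delay or drop

# Illustrative contract: 1000 bytes/s average, 1500-byte bursts allowed.
bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
```

A full 1500-byte packet drains the bucket, so a second one sent immediately afterwards is held back until the rate-limited refill catches up, which is exactly the shaping behavior described above.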

Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and “throttling” of Internet services, all the way to legal enforcement that prevents companies from subsidizing Internet use on particular sites. Contrary to popular rhetoric and statements by various individuals involved in the ongoing academic debate, research suggests that a single policy instrument (such as a no-blocking policy or a quality-of-service policy) cannot achieve the range of valued political and economic objectives central to the debate. As Bauer and Obar suggest, “safeguarding multiple goals requires a combination of instruments that will likely involve government and non-government measures. Furthermore, promoting goals such as the freedom of speech, political participation, investment, and innovation calls for complementary policies.”

PRATYUSH

On Monday, 08-01-18, India unveiled its fastest supercomputer, ‘Pratyush’, an array of computers that can deliver a peak power of 6.8 petaflops. One petaflop is a million billion floating point operations per second and is a reflection of the computing capacity of a system.
According to reports from the Indian Institute of Tropical Meteorology (IITM), Pratyush is the fourth fastest supercomputer in the world among those designed for weather and climate research. It will also lift India's best-ranked supercomputer from the 300s to the 30s in the Top500 list, a respected international tracker of the world's fastest supercomputers.
The government had sanctioned ₹400 crore last year to put in place a 10-petaflop machine. The main function of this supercomputer will be monsoon forecasting with the help of a dynamic model. This requires simulating the weather for a given month and letting a custom-built model calculate how the actual weather will play out over June, July, August, and September. The new system makes it possible to map regions in India at a resolution of 3 km and the globe at 12 km.
The machines will be installed at two government institutes: a 4.0-petaflop HPC facility at IITM, Pune, and a 2.8-petaflop facility at the National Centre for Medium-Range Weather Forecasting, Noida.
The sole purpose of installing such a high-capacity supercomputer in India is to accelerate weather forecasting in the country, primarily ahead of the arrival of the monsoon season. In addition, Pratyush will help monitor the onset of other natural calamities such as floods and tsunamis. Farmers in particular stand to benefit, as an unpredictable rainy season in India often results in poor annual crop production.
“This increase in supercomputing power will go a long way in delivering various societal applications committed by MoES. This will also give a fillip to research activities not only in MoES but also in other academic institutions working on various problems related to Earth Sciences,” said IITM in its release.

 

INDIA’S OTHER SUPERCOMPUTERS

With Pratyush, India makes its way into the list of the top 30 supercomputers in the world. As of June 2017, the following Indian systems were on the list of the top 500 supercomputing systems:

  • SahasraT (SERC – Cray XC40) installed at Indian Institute of Science (ranked 165)
  • Aaditya (iDataPlex DX360M4) installed at Indian Institute of Tropical Meteorology (ranked 260)
  • TIFR – Cray XC30 installed at Tata Institute of Fundamental Research (ranked 355)
  • HP Apollo 6000 Xl230/250 installed at Indian Institute of Technology Delhi (ranked 391)

Why Magento for E-commerce?

The world's biggest brands love Magento for its flexibility, because today's consumers and their buying patterns change by the minute, and they always want the best and most convenient service. Magento offers online merchants a flexible shopping cart system, as well as control over the look, content, and functionality of their online store. It is also SEO friendly, so your website can attract users and convert them into long-term customers. We believe that Magento is one of the best e-commerce platforms available today, with editions ranging from community open source to massive, large-scale enterprise SaaS-based systems.

Shops with only a few products and simple needs can easily expand to tens of thousands of products and complex custom behavior without changing platforms. Magento offers a variety of plug-ins and themes which can easily enhance a customer’s experience. Extensions allow you to add custom features and functionality to every area of your Magento store, including the front and back end, integrations with other web services, marketing tools, themes and more. They are developed through a broad network of Magento partners to give you the flexibility and power to maintain your store the way you want. More complex custom functionality usually calls for a developer, which is why developers are often brought in to adjust a Magento website. It is a very robust system: no one shopping online today wants to wait for pages to reload.

Magento, being so prevalent on the internet, is under constant attack from hackers. However, it also has a big user base, so developers find and patch any security holes as soon as they are discovered. This is clearly good news, but only if every merchant using Magento applies security patches as soon as they become available. Sites which install Magento and then leave it without constant attention to updates put themselves at risk. This provides another revenue stream for hosting providers, who can constantly update Magento installations and charge for the service. If you are looking for a multi-lingual website, Magento can be the right choice. Moreover, everything like zoom-in features, multiple product images, categorized display of products, special discount offers, a multi-tier pricing system, etc. can be managed from a single admin panel. There are also several add-on modules and extensions which can be used for your e-commerce website. Magento has clean admin code and hence offers an excellent user interface; the backend elements are well organized and make the website look attractive and appealing. The platform is constantly improved: you can easily download updates via the admin panel and apply the changes to your Magento e-shop.

Magento not only handles your online store effortlessly but also helps you with promotion, marketing, and conversion. It offers numerous tools to make your advertising easier. These tools include –

  • Cross-sell products
  • Promotional pricing restricted to selected products or categories
  • Option to distribute coupon codes across email, newsletters and offline
  • Coupon usage monitoring, newsletter and poll management
  • Free shipping offers and new-product list promotion
  • Price variation based on quantity and customer groups
  • Landing page tools for PPC, new product promotional tools, URL tools, and more

Magento is better in every way and it has so many other features such as mobile friendliness, customer service, and international support, tracking, analytics, and reporting.

3D Touch

Force Touch technology was not sufficient for a fast-upgrading market; to resolve this problem, 3D Touch was introduced. Taking Force Touch technology to an altogether new level, Apple launched 3D Touch on the iPhone 6S and 6S Plus. More sensitive than Force Touch, 3D Touch works using capacitive sensors integrated into the display.
Working-

In functionality, 3D Touch is really smart. It allows you to carry out certain tasks instantly through quick actions, which is especially handy for the tasks we use most often, without needing to launch an app first. For instance, if you want to take a selfie, you don’t need to launch the Camera app: simply light-press the Camera app icon and you get the Take Selfie option right on your Home screen.
Functionality for Peek and Pop-
In order to master 3D Touch, you need to understand Peek and Pop. The former is a light press; the latter is a hard press.
If you want to peek at a message, you just press it lightly. And if you wish to pop into the message for a full view, you press a little more deeply. That’s how it works!
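Under the hood, the peek/pop distinction boils down to pressure thresholds. A toy sketch (the threshold values are invented for illustration; real devices calibrate them in hardware via the Taptic Engine):

```python
# Classify a normalized force reading into the three 3D Touch responses.
# The 0.3 / 0.7 thresholds are hypothetical illustration values.
def classify_press(force: float,
                   peek_threshold: float = 0.3,
                   pop_threshold: float = 0.7) -> str:
    """Map a force reading in [0, 1] to 'tap', 'peek', or 'pop'."""
    if force >= pop_threshold:
        return "pop"    # deep press: open the full view
    if force >= peek_threshold:
        return "peek"   # light press: show the preview
    return "tap"        # ordinary touch: normal behavior
```

Hysteresis (slightly different thresholds on the way up versus down) would be added in practice so a preview doesn't flicker between peek and pop at the boundary.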
Differentiating between Force Touch and 3D Touch-
Force Touch is smart enough to detect the pressure applied on the screen. It can detect not just multiple touches on the screen but can also calculate the difference in pressure on various points of the screen.
However, while reacting to your touch, Force Touch is not as fast as 3D Touch. The lightning fast response of the 3D Touch is because of the fusion of capacitive sensors and strain gauges. This fusion is perfected by the Taptic Engine.
The Future for Touch Technology-

Given how much 3D Touch has been appreciated by iPhone 6s users, touch technology is going to get a lot better in the future. With its ability to let iPhone owners use their devices more conveniently and faster, it is here to stay.
Android smartphone makers are also working to provide 3D Touch-style pressure sensitivity on their phones; some smartphones based on the Nougat version already provide a form of it.
