Friday, 19 August 2016

Cisco planning to shed 5,500 jobs, not the rumored 14,000



The economic downturn is still causing damage at most blue-chip companies. The rumored news that Cisco was planning to lay off a whopping 14,000 of its employees was seriously not pleasant news to behold. According to the tech news site CRN, Cisco was rumored to be making plans to reduce headcount within a few weeks.

This latest news is really huge, considering that Intel Corporation had earlier announced plans to do the same on an equally large scale. Well, the 'rumor' had it that Cisco cited as its reason the effects of the global economy on enterprise business and low patronage of its routers and switches, which constitute the bulk of what Cisco produces.

Cisco, though, has always maintained that it is primarily a software-based company, with its range of software-based products and services being rolled out consistently. According to the news source, CRN, one of the analysts interviewed said that if the rumored layoff was true, it would be a bit of a catch-up, as the company is moving away from hardware.

He went on to say, "I do not think that they are going to be done after this." The tech site further reported that the company has already offered many employees early retirement packages. This news was flying around before Cisco came out and announced the actual number of employees to be laid off at Wednesday's press conference.

Cisco denied ever releasing or leaking any of the documents, carried on most tech news sites, concerning the number of employees to be laid off.
CRN reporter Mark Haranas also denied that the information came from Cisco, and added that his report said the cuts would come over the next few weeks, suggesting Cisco could still add to its total beyond what was disclosed.

Trying to calm things down a bit, Cisco Chief Financial Officer Kelly Kramer said that "We are going to continue to do acquisitions, and we will continue to hire to get the right people" this year despite the restructuring. According to her, Cisco had added about 1,600 employees through acquisitions in the fiscal year that ended July 30, and its hiring was mostly focused on the company's software and services businesses.


Saturday, 13 August 2016

Can sounds from your hard drive now make your system vulnerable?



Is it time to rethink the type of hard drive one uses? Going by research results coming out of Israel, PCs can actually be hacked with the help of the sound coming from the hard drive.

So a strong password may no longer be enough to protect one's system from hackers; now the sounds from one's hard drive and cooling fans can make the system vulnerable too.

Great! According to PCWorld, researchers have found a way to steal a PC’s data by using the mechanical noise coming from the hard disk drives inside.
It’s not a very practical hack, but the scheme has been designed for “air-gapped” systems, or computers that have been sectioned off from the Internet.
The researchers at Ben-Gurion University of the Negev in Israel have been studying how to use sound to extract information from air-gapped computers.

In June, they showed that even a PC’s cooling fans can be controlled to secretly transmit data, including passwords and encryption keys.

In a new paper, the researchers found that a PC’s hard disk drive could also generate enough noise to do the same. They did this by manipulating the drive’s internal mechanical arm to generate binary signals.

Typically, the mechanical arm only reads and writes data within the hard drive. But when in use, it also creates a good deal of sound at different frequencies -- which the researchers decided to exploit.


They developed a piece of malware called “DiskFiltration” which can infect a Linux-based PC to control a hard disk drive’s operations. To record the emitted noise, the researchers placed a Samsung Galaxy S4 phone nearby to log and decrypt the signals.

They found that their hack could transmit enough 0s and 1s for a stream of data, including passwords. However, the transmission rate is quite slow, at only 180 bits per minute, and the range is only effective up to about six feet.
Nevertheless, the method is covert.
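
To put that 180-bits-per-minute figure in perspective, here is a quick back-of-the-envelope calculation in Python (a rough sketch; the payload sizes are my own illustrative choices, not figures from the paper):

RATE_BPM = 180  # reported transmission rate, in bits per minute

def minutes_to_leak(n_bits):
    """How long exfiltration of n_bits takes at the reported rate."""
    return n_bits / RATE_BPM

# Illustrative payloads: an 8-character ASCII password, an AES key, an RSA key.
for label, bits in [("8-char password", 64), ("256-bit AES key", 256), ("4096-bit RSA key", 4096)]:
    print(f"{label}: about {minutes_to_leak(bits):.1f} minutes")

Even at that crawl, a short password leaks in well under a minute, which is why the researchers still consider the channel worthwhile despite the short range.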

A hacker could infect an air-gapped system with a USB stick and then secretly extract the data simply by recording the nearby sounds.





To prevent this kind of hacking, owners of air-gapped systems can consider using solid-state drives, which have no moving parts, the researchers said.

Monday, 8 August 2016

Microsoft reduces Windows 10 roll-back grace period



This is surely a drastic change of mind from Microsoft: the company has confirmed that the 'I changed my mind' grace period in Windows 10 has been cut from the initial 30 days to just 10. Users who upgraded to Windows 10 were able to roll back to the preceding version of Windows as long as they did so within 30 days.

To make that possible, Microsoft stored the older operating system in a special folder on the device's drive, consuming up to 5GB of storage space. After the grace period expired, the folder's contents were deleted.
With last week's Anniversary Update, aka version 1607, the 30 days were reduced to 10.

Microsoft said that the behind-the-scenes change had been triggered by data gleaned from the voluminous telemetry it collects from Windows 10 devices.

"Based on our user research, we noticed most users who choose to go back to a previous version of Windows do it within the first several days," a spokesman said in an email. "As such, we changed the setting to 10 days to free storage space used by previous copies."

Meanwhile, since the grace period for the free Windows 10 upgrade elapsed on July 29, the retail price for an upgrade is expected to be between $110 and $200.


Apple Watch 2 to come with a much faster processor, GPS and some other features



So much improvement for a watch from Apple: going by the rumors and predictions for the Apple Watch 2 release, it promises to be a really well-built watch with an improved processor, GPS, barometer and some other great features.

Well, according to AppleInsider, the well-connected analyst Ming-Chi Kuo said in a note that he believes Apple is planning to launch two new Apple Watch versions in the second half of 2016, both of which offer moderate improvements over their predecessor.

The first unit will be an iterative upgrade on the original Apple Watch and is expected to sport the same aesthetics, but with improved internals like a TSMC processor built on the 16nm process. Waterproofing should also be slightly improved.

A second version, dubbed "Apple Watch 2," is also expected to share the same general design as current models, but will include a GPS radio and barometer for improved geolocation capabilities.

A higher capacity battery will be included to power the advanced components, but its size will prohibit Apple's usual generational device slimming.





Thursday, 4 August 2016

IoT Comes Ever Closer With Microsoft's Windows 10 Smart-Home Centerpiece Deal



This is huge news for the world of technological innovation; with IoT, life can get really interesting. According to PCWorld, Microsoft wants to put Windows 10 at the center of smart homes. The company wants users to be able to tell the operating system's Cortana voice assistant to switch on a light, open a door, release food for a cat, and even check the contents of a refrigerator.







For Windows 10 to be successful, the OS will have to work with a wide range of smart home and IoT devices, and that goal has taken a big step forward thanks to a recent agreement between standards bodies the Open Connectivity Foundation (OCF) and the Thread Group. The two organizations will work together on improving interoperability between smart home and IoT devices.

This means devices running Windows 10 will be able to connect with most smart home products and program home automation tasks based on events or times of the day.
The new alliance will see the major IoT standards-settings groups working together to make it easy for devices to discover and communicate with one another.

"As members of OCF, we are very excited by this development and look forward to moving closer to a world where smart home devices ‘just work’ together regardless of brand or make,” a Microsoft representative said in an email.

The alliance will benefit smart-home customers, with less guesswork involved in getting devices to work together. The alliance between OCF and Thread Group will help Windows 10 devices natively support and communicate with products from companies like Nest Labs, an Alphabet company.
Multiple IoT standards have hurt interoperability between devices, and the alliance gets rid of the fragmentation in standards that's threatening the IoT and smart home markets.

Founders of the Thread Group include ARM, Samsung, Qualcomm, and Nest. The OCF brings together security, discovery, and connectivity tools from the Microsoft-backed AllSeen Alliance and the former Open Interconnect Consortium (OIC), with key members including Intel, Samsung, and Dell. The OIC was renamed OCF.

Microsoft's plan is to integrate OCF protocols -- which are due to be released in 2017 -- into Windows 10. The integration will ultimately bring the Thread Group protocols and network transports to Windows 10.

Users could then automate tasks using a Windows PC, mobile device, Xbox console, or Raspberry Pi 3. Users will be able to create profiles and assign actions for smart home devices. For example, users could establish a specific profile in Cortana like "activate smart home," which would trigger actions like switching on lights and air conditioning.

Microsoft will have to incorporate Thread APIs into its plans for OCF tools. Microsoft has already released an open-source bridge to connect OIC tools, called IoTivity, with the AllSeen Alliance's AllJoyn APIs. It will help AllJoyn devices talk to OIC-compatible IoT devices.

Source : PCworld

Popular apps with weak encryption are always a disaster; ask the Turkey coup plotters

Morning after coup attempt

When a developer fails to give maximum attention to the encryption of an app that stores its users' sensitive data, then what happened in Turkey is the obvious result, on a very large scale.

According to the Guardian, "Turkish authorities were able to trace thousands of people they accuse of participating in an underground network linked to last month’s failed military coup by cracking the weak security features of a little-known smartphone messaging app.

Security experts who looked at the app, known as ByLock, at the request of Reuters said it appeared to be the work of amateur software developers and had left important information about its users unencrypted.

A senior Turkish official said Turkish intelligence cracked the app earlier this year and was able to use it to trace tens of thousands of members of a religious movement the government blames for last month’s failed coup.

Members of the group stopped using the app several months ago after realising it had been compromised, but it still made it easier to swiftly purge tens of thousands of teachers, police, soldiers and justice officials in the wake of the coup." You can read more at The Guardian.



Dropbox Paper, the latest in the line of Dropbox Innovations





Dropbox is certainly innovating, with the latest release, Dropbox Paper, which allows users to collaborate or brainstorm on documents in real time.

The app, which was previously limited to certain users, has been made public for anyone to try. "We built Dropbox Paper to help fast-moving teams create collaborative docs and share important information. It’s a big part of how we’re reimagining the way people work together.





We originally launched Paper to a limited number of teams in private beta. And now we’re excited to open up the beta so anyone can sign up—without the waitlist.

Plus, we have new Paper mobile apps for iOS and Android that you can use for on-the-go access," writes dropbox.com/paper. You can download them from the Play Store or the App Store.

The blog post continues: "Early Paper users have already created over a million docs and given us a ton of useful feedback. With that input, we’ve made improvements like enhanced tables and image galleries; desktop, web, and mobile notifications; and powerful search to help you quickly find the docs you need.
And our new mobile apps let you get project updates, make edits, and respond to feedback—any time, anywhere."



Source : Dropboxblog.


Wednesday, 27 July 2016

BlackBerry reinvention continues with the DTEK50, fully Android

 
BlackBerry's effort to bounce back in smartphone sales is really commendable, given that the latest BlackBerry is only their second fully Android phone. Billed by BlackBerry as the world's most secure Android phone, it really does come with some cool and powerful features.

The DTEK50, as it is called, has DTEK security software to stave off malware and other security breaches. It has a 5.2-inch, 1080p display, a Qualcomm Snapdragon 617 processor, 3GB of RAM, a 13-megapixel camera, and a 2,610mAh battery. The 8-megapixel front camera also includes a flash for taking selfies.

It runs Android 6.0 Marshmallow with BlackBerry's software features, such as the Hub. The software is similar to the software on the Priv released last year. 

According to The Verge, BlackBerry says that it has modified Android with its own technology originally developed for the BB10 platform to make it more secure. The company is also committing to rapid updates to deliver security patches shortly after they are released.

Tuesday, 19 July 2016

Star Trek to have another film release


The sci-fi space saga Star Trek, which was rebooted in 2009 at the hands of J.J. Abrams, is getting a fourth entry in its rebooted franchise, Paramount Pictures has announced.
The new film will be the fourteenth overall film for Star Trek's cinematic adventures, and will see the return of Chris Hemsworth (Thor) as George Kirk, father of Chris Pine's Captain Kirk.

"In the next instalment of the epic space adventure, Pine's Kirk will cross paths with a man he never had a chance to meet, but whose legacy has haunted him since the day he was born: his father," Paramount said in a statement.

Separately, the new Star Trek TV series will be broadcast starting January 2017 on CBS in the US and Canada. For everyone else around the world, Netflix announced this week that it would be airing the new series in 188 countries, in a deal that also secured access to the entire back catalogue of 727 episodes.


Source: gadget360


NASA’s Kepler discovers 100+ Exoplanets During Its K2 Mission


If more life-supporting planets are discovered, what next for Earth and her inhabitants? Well, according to NASA, an international team of astronomers has discovered and confirmed a treasure trove of new worlds using NASA’s Kepler spacecraft on its K2 mission.

Out of 197 initial planet candidates, scientists have confirmed 104 planets outside our solar system. Among the confirmed planets is a planetary system comprising four promising planets that could be rocky.

The planets, all between 20 and 50 percent larger than Earth by diameter, are orbiting the M dwarf star K2-72, found 181 light years away in the direction of the Aquarius constellation. The host star is less than half the size of the sun and less bright. 

The planets’ orbital periods range from five and a half to 24 days, and two of them may experience irradiation levels from their star comparable to those on Earth. 

Despite their tight orbits, closer than Mercury's orbit around the sun, the possibility that life could arise on a planet around such a star cannot be ruled out, according to lead author Crossfield, a Sagan Fellow at the University of Arizona's Lunar and Planetary Laboratory.

The researchers achieved this extraordinary "roundup" of exoplanets by combining data with follow-up observations by earth-based telescopes including the North Gemini telescope and the W. M. Keck Observatory in Hawaii, the Automated Planet Finder of the University of California Observatories, and the Large Binocular Telescope operated by the University of Arizona.

Both Kepler and its K2 mission discover new planets by measuring the subtle dip in a star's brightness caused by a planet passing in front of its star.  
In its initial mission, Kepler surveyed just one patch of sky in the northern hemisphere, determining the frequency of planets whose size and temperature might be similar to Earth orbiting stars similar to our sun. 

In the spacecraft’s extended mission in 2013, it lost its ability to precisely stare at its original target area, but a brilliant fix created a second life for the telescope that is proving scientifically fruitful. Continue reading at NASA.gov

Monday, 18 July 2016

Opera browser sold for $600m



A Chinese consortium has bought the Opera Internet browser for $600 million (EUR 543 million or roughly Rs. 4,030 crores), its Norwegian developer said Monday, after a public share offer for the company failed. 

The Chinese consortium, led by Golden Brick Silk Road, will only purchase the mobile and desktop versions of the Opera browser, plus its performance and privacy apps and a stake in a Chinese joint venture, but not the advertising, games and television units, Opera Software said in a statement to the Oslo stock exchange, according to Gadgets 360.

The earlier $1.2 billion public offer fell through after it failed to receive regulatory approval by the July 15 deadline. Opera's CEO was quoted in the daily Dagens Naeringsliv as saying, "It wasn't that the approvals weren't given, just that it didn't happen before the deadline''.

Opera says its light, quick browser is used by more than 350 million consumers worldwide and expects that number to grow once it enters the Chinese market, considering that the Golden Brick Silk Road fund comprises Beijing Kunlun Tech, which specialises in mobile games, and Qihoo 360, which specialises in cyber security.

Well, Opera is certainly a very sleek browser, but it is still ranked fourth among mobile browsers behind Google Chrome, Apple's Safari and the Android Browser in the monthly ratings released last month by NetMarketShare. All the best to Opera and the new buyer; here's hoping the deal makes Opera even better in both its mobile and desktop versions.





Pokemon certainly attracting 'interest': servers hit by a DDoS attack!







OurMine, a hacking group that had previously attacked and compromised the social media accounts of many celebrities, has taken responsibility for the server outage that Pokemon Go players were recently complaining about, saying on Monday that it was behind the attack.

The group hit Pokemon Go's login servers with a distributed denial-of-service (DDoS) attack, leaving many frustrated players unable to log in to the game, according to TechCrunch.

"No one will be able to play this game till Pokemon Go contact us on our website to teach them how to protect it!" the group wrote in a post on its website. An OurMine member told TechCrunch that he or she is part of a three-person group of teenagers and that the team is trying to spread the word about security.

The group said that it is promoting stronger security and that "if it did not hack celebs and DDoS popular games, someone else would. We don't want other hackers attack their servers, so we should protect their servers," the OurMine member explained. To read my earlier post on how DDoS works, go to DDOS.





Japan's Softbank buys ARM for a staggering $32B


ARM is a British multinational semiconductor and software design company with headquarters in England. ARM is most popular for its design of ARM processors which are commonly used in most of the smartphones produced today. 

ARM announced on Monday morning that it had agreed to an offer that will see it acquired by SoftBank, a Japanese telecoms company, for a huge $32.15 billion. SoftBank intends to use ARM's chip knowledge to expand its Internet of Things division as the world gears up for IoT.

"There is a great alignment between the way we work and the way they think about the world," said Simon Segars, ARM's chief executive in a video, posted by the company,in which he answerered questions about the acquisition.

As part of the deal, SoftBank promised to employ an additional 1,500 British staff over the next five years, as well as to grow the company outside of the UK. The chipmaker's successful business model, culture and brand will remain unchanged, the two companies said.

Friday, 15 July 2016

Microsoft wins Appeal against DOJ



A court has ruled in favor of Microsoft against the Department of Justice, in a case that looks like the one Apple faced earlier this year, in which both companies were asked to grant the government access to one of their products or services in a way that would violate the privacy policies of both tech giants.

According to the BBC, the judgment favored Microsoft: the US government cannot force the company to give authorities access to servers located in other countries. The ruling, made by an appeals court, overturns an order granted by a court in Manhattan in 2014.

"It makes clear that the US government can no longer seek to use its search warrants on a unilateral basis to reach into other countries and obtain the emails that belong to people of other nationalities," Brad Smith, president and chief legal officer, of Microsoft told the BBC.

"It tells people they can indeed trust technology as they move their information to the cloud," he said.
Microsoft thanked the companies that had backed its appeal, which included the likes of Amazon, Apple and Cisco.



Source: BBC


Fight against DDOS attacks



An ordinary computer user in an organization may not care much about the cause of a crawling internet connection or a complete outage; most of the time, the average employee outside the IT department only cares about when the connection will be back to being as fast as ever.

Well, by the time these users are complaining, the network admin, I am sure, will be sweating profusely somewhere in the air-conditioned IT room trying to arrest the situation, at least any admin who is not well prepared for this type of situation.

I am sure most network admins and engineers know how to set up their networks to mitigate and recover from this sort of threat. What I am talking about today is the DDoS (Distributed Denial of Service) attack, which has evolved over time into a sophisticated tool.

Its older form, DoS (Denial of Service), used to be hackers' low-key way of attacking networks: originating a huge amount of traffic from a single source towards a target server or network device. The goals include, but are not limited to, rendering the target's internet connection dead slow or temporarily out of bandwidth, and forcing the target's network device, in this case a router or a server, to drop packets because of the overwhelming number of requests flooding the link.

Now we have DDoS, which originates an insane amount of traffic from many different sources on the internet towards a target machine, link or server. These sources are computer systems that have been compromised, with one of them potentially acting as the DDoS master or botmaster.

These malware-infected (vulnerable) computer systems come under the control of the master DDoS system and carry out its instruction to send or forward traffic to a chosen target, without the knowledge of their users.

Collectively these computers are called a botnet, and given their number and the packets emanating from each of them, they can overwhelm just about any network. Put more simply, a botnet is a gang of Internet-connected compromised systems that can be used to send spam email messages, participate in DDoS attacks, or perform other illegitimate tasks.

The word botnet comes from the words robot and network. The compromised systems are often called zombies. Systems can be turned into zombies by tricking users into making a "drive-by" download, exploiting web browser vulnerabilities, or convincing the user to run other malware such as a trojan horse program.

These attacks can be targeted at financial institutions, news sites, or any organization of great standing, and can be as large as maxing out the bandwidth of an entire country.

The sole aim of this type of attack is to prevent legitimate users from accessing their systems or sites; in other instances, it can be a smokescreen used to camouflage more dangerous and sensitive activity, like stealing very sensitive information from a server or system.

Because the attack emanates from thousands of machines at once, it can be very difficult to stop by simply blocking traffic from particular machines, especially when the attackers forge the IP addresses of the attacking computers, making it very difficult for defenders' devices to filter traffic based on IP addresses.

These attacks are not limited to computers and web servers; a variation can also target phones and phone systems, as was reported some time ago in Ukraine, where hackers caused a power outage at two plants and launched a telephone denial-of-service attack against customer call centers to prevent residents from reporting the outage to the companies.

So we are in an era of highly sophisticated DDoS, with constant media reports of a wide range of attacks carried out against countries and institutions alike.

There are some specific common types of DDoS attack:

ICMP (Ping) Flood
Similar in principle to the UDP flood attack, an ICMP flood overwhelms the target resource with ICMP Echo Request (ping) packets, generally sending packets as fast as possible without waiting for replies. 
This type of attack can consume both outgoing and incoming bandwidth, since the victim’s servers will often attempt to respond with ICMP Echo Reply packets, resulting in a significant overall system slowdown.
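
To make the detection side concrete, here is a minimal Python sketch (my own illustration, not a vendor tool) that uses scapy to count ICMP echo requests per source over a sliding window and flag anything above an arbitrary threshold; the window and threshold values are assumptions:

import time
from collections import defaultdict
from scapy.all import sniff, ICMP, IP   # pip install scapy; sniffing needs root privileges

WINDOW = 10        # seconds per sliding window (assumed value)
THRESHOLD = 200    # echo requests per window treated as suspicious (assumed value)
counts = defaultdict(list)

def on_packet(pkt):
    if pkt.haslayer(ICMP) and pkt[ICMP].type == 8:   # ICMP type 8 = echo request
        src, now = pkt[IP].src, time.time()
        counts[src] = [t for t in counts[src] if now - t < WINDOW] + [now]
        if len(counts[src]) > THRESHOLD:
            print(f"Possible ICMP flood from {src}: {len(counts[src])} pings in the last {WINDOW}s")

sniff(filter="icmp", prn=on_packet, store=False)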
Ping of Death
A ping of death ("POD") attack involves the attacker sending multiple malicious pings to a computer. The maximum packet length of an IP packet (including header) is 65,535 bytes. 

However, the Data Link Layer usually poses limits to the maximum frame size - for example 1500 bytes over an Ethernet network. In this case, a large IP packet is split across multiple IP packets (known as fragments), and the recipient host reassembles the IP fragments into the complete packet. 

In a Ping of Death scenario, following malicious manipulation of fragment content, the recipient ends up with an IP packet which is larger than 65,535 bytes when reassembled. This can overflow memory buffers allocated for the packet, causing denial of service for legitimate packets. 
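
The arithmetic behind that overflow is easy to sketch. The toy check below (an illustration, not real reassembly code) computes where each fragment claims to end, remembering that the fragment offset field counts 8-byte units, and flags any datagram whose reassembled size would exceed 65,535 bytes:

MAX_IPV4_BYTES = 65_535   # the largest size the IP length field can describe

def fragment_end(offset_units, payload_len):
    """Byte position where a fragment claims to end (offset is in 8-byte units)."""
    return offset_units * 8 + payload_len

def oversized(fragments):
    """fragments: list of (fragment_offset_in_8_byte_units, payload_length_in_bytes)."""
    return max(fragment_end(off, ln) for off, ln in fragments) > MAX_IPV4_BYTES

# A final fragment pushed to the top of the offset range overflows the limit:
print(oversized([(0, 1480), (8189, 100)]))   # True: 8189*8 + 100 = 65,612 bytes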

HTTP Flood
In an HTTP flood DDoS attack, the attacker exploits seemingly legitimate HTTP GET or POST requests to attack a web server or application. HTTP floods do not use malformed packets, spoofing or reflection techniques, and require less bandwidth than other attacks to bring down the targeted site or server.

The attack is most effective when it forces the server or application to allocate the maximum resources possible in response to each single request.  
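
Because the individual requests are well formed, the usual counter-measure is per-client rate limiting rather than signature matching. Below is a minimal token-bucket sketch of the idea in Python; the class and the rate/burst limits are my own illustrative choices, not part of any standard:

import time

class TokenBucket:
    """Allow a steady request rate per client, with a small burst allowance."""
    def __init__(self, rate=5.0, burst=20):            # assumed limits
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), time.time()

    def allow(self):
        now = time.time()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}

def handle_request(client_ip):
    bucket = buckets.setdefault(client_ip, TokenBucket())
    return 200 if bucket.allow() else 429               # 429 Too Many Requests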

TCP Connection attack
An attack of this nature exploits a known weakness in the TCP connection sequence (the “three-way handshake”), wherein a SYN request to initiate a TCP connection with a host must be answered by a SYN-ACK response from that host, and then confirmed by an ACK response from the requester. 

In a SYN flood scenario, the requester sends multiple SYN requests, but either does not respond to the host’s SYN-ACK response, or sends the SYN requests from a spoofed IP address.

Either way, the host system continues to wait for acknowledgement for each of the requests, binding resources until no new connections can be made, and ultimately resulting in denial of service.
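
From the defender's side, the tell-tale sign is a pile of half-open (embryonic) handshakes. The snippet below is a toy Python illustration that tracks them per source and alerts past a made-up threshold; the event feed is simulated, not a real packet capture:

from collections import defaultdict

HALF_OPEN_LIMIT = 100          # assumed alert threshold
pending = defaultdict(set)     # src_ip -> handshakes still waiting for the final ACK

def on_tcp_event(src_ip, src_port, dst_port, flags):
    key = (src_port, dst_port)
    if flags == "SYN":                     # handshake started
        pending[src_ip].add(key)
    elif flags == "ACK":                   # handshake completed normally
        pending[src_ip].discard(key)
    if len(pending[src_ip]) > HALF_OPEN_LIMIT:
        print(f"Possible SYN flood from {src_ip}: {len(pending[src_ip])} half-open connections")

# A source spraying SYNs across many ports and never finishing the handshake:
for port in range(40000, 40150):
    on_tcp_event("203.0.113.9", port, 80, "SYN")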

Slowloris
Slowloris is a highly-targeted attack, enabling one web server to take down another server, without affecting other services or ports on the target network. Slowloris does this by holding as many connections to the target web server open for as long as possible. 

It accomplishes this by creating connections to the target server, but sending only a partial request. Slowloris constantly sends more HTTP headers, but never completes a request. The targeted server keeps each of these false connections open. 

This eventually overflows the maximum concurrent connection pool, and leads to denial of additional connections from legitimate clients.
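
The standard mitigation is to give each client a hard deadline to deliver its complete headers and drop anything slower. Here is a bare-bones Python sketch of that idea using a socket timeout; the deadline is an assumed value, and a real server would of course handle each connection in its own worker rather than one at a time:

import socket

HEADER_DEADLINE = 10   # seconds allowed to deliver complete request headers (assumed)

def serve(host="0.0.0.0", port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(128)
    while True:
        conn, addr = srv.accept()
        conn.settimeout(HEADER_DEADLINE)          # every recv() below must beat this deadline
        data = b""
        try:
            while b"\r\n\r\n" not in data:        # headers not finished yet
                chunk = conn.recv(4096)
                if not chunk:
                    break
                data += chunk
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        except socket.timeout:
            print(f"Dropping slow client {addr}")  # Slowloris-style behaviour
        finally:
            conn.close()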

When defending against them, all these specific types can be grouped into three categories: volume-based attacks, protocol-based attacks and application-based attacks.

To be properly prepared to defend the network infrastructure from DDoS attacks, it is extremely important to know as soon as possible that there is anomalous behavior, malicious or otherwise, occurring in the network. 

Having a pre-emptive awareness of malicious or nefarious behaviors and other incidents in the network will go a long way toward minimizing any downtime that impacts the network's data, resources, and end users.

The challenge in preventing DDoS attacks lies in the nature of the traffic and the nature of the "attack" because most often the traffic is legitimate as defined by protocol. 

Therefore, there is not a straightforward approach or method to filter or block the offending traffic. Furthermore, the difference between volumetric and application-level attack traffic must also be understood.

Volumetric attacks use an increased attack footprint that seeks to overwhelm the target. This traffic can be application specific, but it is most often simply random traffic sent at a high intensity to over-utilize the target's available resources. Volumetric attacks generally use botnets to amplify the attack footprint. Additional examples of volumetric attacks are DNS amplification attacks and SYN floods.

 Application-level attacks exploit specific applications or services on the targeted system. They typically bombard a protocol and port a specific service uses to render the service useless. Most often, these attacks target common services and ports, such as HTTP (TCP port 80) or DNS (TCP/UDP port 53).

Let's look at a few Cisco-approved ways of mitigating these attacks. There is no single method to fight a DDoS attack; it takes a combination of many strategies.

Geographical Dispersion (Global Resources Anycast)
A newer solution for mitigating DDoS attacks dilutes attack effects by distributing the footprint of DDoS attacks so that the target(s) are not individually saturated by the volume of attack traffic. This solution uses a routing concept known as Anycast. 

Anycast is a routing methodology that allows traffic from a source to be routed to various nodes (representing the same destination address) via the nearest hop/node in a group of potential transit points. This solution effectively provides "geographic dispersion."

Route Filtering Techniques
Remotely triggered black hole (RTBH) filtering can drop undesirable traffic before it enters a protected network. Network black holes are places where traffic is forwarded and dropped. When an attack has been detected, black holing can be used to drop all attack traffic at the network edge based on either destination or source IP address.

 Unicast Reverse Path Forwarding
Network administrators can use Unicast Reverse Path Forwarding (uRPF) to help limit malicious traffic flows occurring on a network, as is often the case with DDoS attacks. This security feature works by enabling a router to verify the "reachability" of the source address in packets being forwarded. 

This capability can limit the appearance of spoofed addresses on a network. If the source IP address is not valid, the packet is discarded. uRPF guards against IP spoofing by ensuring that all packets have a source IP address that matches the correct source interface according to the routing table.
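
Conceptually, a strict uRPF check is just "look up the source address and compare the result with the arrival interface". The toy Python sketch below illustrates that logic with a made-up routing table; it is only a model of the idea, not how a router implements it internally:

import ipaddress

ROUTES = [                                            # (prefix, interface); longest prefix wins
    (ipaddress.ip_network("10.1.0.0/16"), "Gig0/1"),
    (ipaddress.ip_network("10.1.5.0/24"), "Gig0/2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "Gig0/0"),
]

def route_lookup(ip):
    addr = ipaddress.ip_address(ip)
    matches = [(net, iface) for net, iface in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def urpf_pass(src_ip, in_interface):
    """Strict uRPF: the best route back to the source must point out of the arrival interface."""
    return route_lookup(src_ip) == in_interface

print(urpf_pass("10.1.5.7", "Gig0/2"))   # True: the source is reachable via Gig0/2
print(urpf_pass("10.1.5.7", "Gig0/0"))   # False: likely spoofed on this interface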

Reputation-Based blocking
Reputation-based blocking has become an essential component to today's web filtering arsenal. A common trend of malware, botnet activity, and other web-based threats is to provide a URL that users must visit for a compromise to occur. Most often such techniques as spam, viruses, and phishing attacks direct users to the malicious URL.

Reputation-based technology provides URL analysis and establishes a reputation for each URL. Reputation technology has two aspects. The intelligence aspect couples world-wide threat telemetry, intelligence engineers, and analytics/modeling. The decision aspect focuses on the trustworthiness of a URL. Reputation-based blocking limits the impact of untrustworthy URLs.

There are many other methods; you can read more at:
Incapsula
CISCO



Tuesday, 12 July 2016

Pokemon Go... another augmented reality game with an "insane" following

When I wrote one of my articles some time ago, I mentioned that augmented and virtual reality were going to take the world by storm. Fast forward to the present day, and a game is literally driving the world crazy; witness the number of times the developer's servers have crashed under download and login requests they were clearly not prepared for, coming from all the countries allowed to play it.

So what is Pokemon Go? It is an augmented reality, location-based game developed by Niantic Inc. Before we go into detail, it's worth knowing that Pokemon Go is not yet approved in all countries, at least for now, yet it is hotly on the tail of Twitter for the highest number of active users. The USA, Australia and New Zealand are the only countries where it officially launched, but this has not prevented the game being sideloaded using the APK file online. This game, I must confess, is really full of adventure.

Pikachu parades around suburban Tokyo

Explaining the game further, "In simple terms, Pokémon Go uses your phone’s GPS and clock to detect where and when you are in the game and make Pokémon "appear" around you (on your phone screen) so you can go and catch them. As you move around, different and more types of Pokémon will appear depending on where you are and what time it is. 

The idea is to encourage you to travel around the real world to catch Pokémon in the game. (This combination of a game and the real world interacting is known as augmented reality.) So why are people seeking out virtual creatures while at work and as they go to the bathroom? Part of the reason Pokémon Go is popular is that it’s free, so it’s easy to download and play.

But more importantly, Pokémon Go fulfills a fantasy Pokémon fans have had since the games first came out: What if Pokémon were real and inhabited our world? But to understand why people are so enthusiastic about the idea, we first need to go back to the late 1990s — to the original Pokémon games.
The Pokémon games take place in a world populated by exotic, powerful monsters — they can look like rats, snakes, dragons, dinosaurs, birds, eggs, trees, and even swords. 

In this world, people called "trainers" travel around the globe to tame these creatures and, in an ethically questionable manner, use them to fight against each other. Based on the premise of bug catching — a popular hobby in Japan, where the games originated — the big goal in the Pokémon games, from the original Pokémon Red and Blue to the upcoming Pokémon Sun and Moon, is to collect all of these virtual creatures.

But since the games came out for Nintendo’s handheld consoles, fans all around the world have shared a dream: What if Pokémon weren’t limited to the games’ world? What if they were real and inhabited our world? What if we could all be Ash Ketchum, the TV show’s star trainer, who wanders the world in his quest to catch them all and earn his honors by defeating all the gym leaders? I want a Pikachu in real life, dammit! 

Unfortunately, Pokémon aren’t real — at least not yet. But technology has evolved to be able to simulate a world in which Pokémon are real. That’s essentially what Pokémon Go attempts to do: By using your phone’s ability to track the time and your location, the game imitates what it would be like if Pokémon really were roaming around you at all times, ready to be caught and collected. 

And given that many original Pokémon fans are now adults, this idea has the extra benefit of hitting a sweet spot of nostalgia, helping boost its popularity." So after Pokemon Go, what next in the world of augmented and virtual reality? This technology is going to drive some of the century's most enterprising innovations.

source: vox



Saturday, 9 July 2016

Battle of machines for Cyber defense (The World's first Machine Hacking Tournament)

When I saw this online, I couldn't help but think of PoI (for non-followers, it's Person of Interest), where 'the Machine' engaged in a serious battle with the great but controlling Samaritan; it was a battle of super-AIs.

Well, if you have watched the show, chill: it's not that type of intelligent machine that will be doing battle in the world's first all-machine hacking tournament, at least according to the organizers, DARPA (Defense Advanced Research Projects Agency).

 

This has everything to do with IoT because, logically, the more connected and sensitive devices there are, the more exposed they will be once they are eventually connected. Some of the devices that will be interconnected in the IoT will be security-vulnerable and prone to cyber attacks.

So in order to gain ground on cyber attacks before we enter the IoT age, DARPA, a part of the US Defense Department, decided to start this program in earnest. According to the DARPA website:
"Today's approach to cybersecurity depends on computer security experts: experts identify new flaws and threats and remediate them by hand. This process can take over a year from first detection to the deployment of a solution, by which time critical systems may have already been breached. This slow reaction cycle has created a permanent offensive advantage.

The Cyber Grand Challenge (CGC) seeks to automate this cyber defense process, fielding the first generation of machines that can discover, prove and fix software flaws in real-time, without any assistance. If successful, the speed of autonomy could someday blunt the structural advantages of cyber offense."
This, if successful, will really go a long way towards helping realize the Internet of Things dream, because whether we like it or not, a lot of sensitive infrastructure will be interconnected, and it would be disastrous if cyber hackers could take control of these devices easily.

For some of us really far away in terms of location, but not that far in terms of connection, we will be looking forward to 4 August, the date fixed for the final, from our screens.


 

Is Anti Virus really useless?

The subject of antivirus being useless is not something that started today, but with the pace of technological improvement and the sophistication of hackers, the debate about the relevance of antivirus software on our machines is back again. Personally, I don't think it's entirely useless, going by my at least eight years of experience using different products, both paid and free. Go here to see some of the viewpoints on the topic.

Basics of Campus network Design part 1

A complex network, in this context, can be either an enterprise network or a campus network. When we talk about designing a complex network, we should apply sound engineering principles so as not to make a mess of our network.

So in my brief article today, I am going to highlight two obvious reasons we should endeavor to structure our network designs.

In engineering, especially software engineering, we were taught that to write or design a complex program that will meet all requirements and run smoothly, we have to follow some laid-down design principles.

The same applies to network design: we want to design a network that is highly available (in our world today, some organizations require close to 100% availability), with airtight security, high flexibility, manageability and, very crucially, scalability.

If, after designing a beautiful network, a future need to connect more devices warrants tearing down all or most of the already functional network, then I must tell you it was never a good design, and it is not scalable.

So we are looking at the design principles of hierarchy and modularity. There are others, like resiliency and flexibility, but my focus in this particular article is on hierarchy and modularity. These principles are interconnected, and they are all equally important in your network designs.

Now whenever one is designing any large network entity, it is very beneficial to build it using a set of modularised components that can be put together in a hierarchical manner. 

When these systems are divided into components or modules, each of them can be designed with some independence from the entire system, and all these modules can be operated as semi-independent elements, meaning you get higher availability and simpler management and operation, which is very crucial.

If you do not isolate these modules so that changes can be made without affecting the whole network, then it becomes absolutely challenging to maintain and run the network smoothly, which is not how it is supposed to be.

It is best design practice that you can routinely make repairs or changes to some parts of your network without compromising the availability of the whole network; likewise, if one part of your network experiences a problem and it affects the entire network, then the design is not proper at all.

This is a serious design flaw that many network engineers fall victim to; maybe in a rush, or in a bid to beat a deadline, one decides to take the less challenging route, which at the end of the day will come back to haunt him.

Looking at hierarchical design, a Cisco publication on the subject points out that we should ask two questions before diving into this particular structured design principle.

"First, what is the overall hierarchical structure of the campus and what features and functions should be implemented at each layer of the hierarchy. Second, what are the key modules or building blocks and how do they relate to each other and work in the overall hierarchy." 

Campus networks ordinarily follow a three-tier hierarchical model: the core, the distribution layer and the access layer.
Hierarchical Design of a Campus Network
The Core:
The core layer in the diagram above comprises four high-speed router/switch processors and, as can be seen, is the building block of the network.

The core serves specific functions and services, just like the other layers; that is the beauty of hierarchical design, in that each of the three layers has a specific role to play in the design.

The core is designed to be highly available (that is non-negotiable) and to operate non-stop. The best way to design it is to provide a good level of redundancy, so that in the case of disaster, link failure or any other form of interruption, there will be immediate data-flow recovery.

It should also be designed bearing in mind that occasional hardware and software upgrades or changes must not interfere with network applications. As we know, this is the backbone of the network; it holds all the parts of the network architecture together and provides connectivity to end devices, data storage services and other computing resources in the network, making its availability extremely important.

Not every campus design establishes all three tiers; in some campus designs the core can be collapsed into the distribution layer, depending on how close the buildings are or whether everything is in one building. But regardless of the campus setting, the major aims of establishing a core are fault isolation and backbone connectivity.
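
To see why that core redundancy matters, here is a small Python sketch that models a dual-core campus topology as a graph and checks that losing any single core switch still leaves every access switch reachable; the node names and the networkx library are my own choices for illustration:

import networkx as nx   # pip install networkx

G = nx.Graph()
core, dist = ["core1", "core2"], ["dist1", "dist2"]
G.add_edge("core1", "core2")                              # redundant core pair
G.add_edges_from((d, c) for d in dist for c in core)      # each distribution switch dual-homed to the core
G.add_edges_from([("acc1", "dist1"), ("acc2", "dist1"),   # access switches hang off distribution
                  ("acc3", "dist2"), ("acc4", "dist2")])

for failed in core:
    survivor = G.copy()
    survivor.remove_node(failed)
    print(f"{failed} down -> everything still reachable: {nx.is_connected(survivor)}")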

Distribution:
In my diagram, the distribution and access layers are shown together; the distribution layer acts as the linkage and control boundary between the core and access layers.

It serves multiple purposes here, one of which is being an aggregation point for all of the access layer switches while also participating in the core routing design.

Another role is providing the policy control, aggregation and isolation demarcation point between the campus distribution building block and the other parts of the network.

Because it has to act as an interface to both the access and core layers, the distribution layer's functions depend on the requirements of both.

Access:
This is the edge of your network, the first point of design. It is where your end devices (printers, PCs, scanners, cameras, etc.) reside and attach to the wired portion of the network.

Wireless APs and IP phones also reside here at the access layer. This is where the demarcation between the network infrastructure and the computing devices takes place, meaning it is the first layer of defense in the network security architecture.

When we look at the hierarchical design image above, we see a design that is scalable, a design that can be extended as much as you want without disrupting the entire structure, unlike the type in the image below.

Network Topology without a core

Designing a campus network without a core, as we can see above, has several limitations, chief among which is scalability.

It has no room at all for future expansion without compromising all or most of the network, which greatly undermines the 100% availability that some establishments crave. More on network design fundamentals to follow in subsequent articles.


Saturday, 2 July 2016

News Flash!

 HP wins $3Bn court case against Oracle

Oracle has been ordered by the court to pay HP a mind-boggling $3bn in damages in a lawsuit instituted against Oracle. The legal battle, which has been ongoing for some time now, ended in favour of HP after the judge ruled against Oracle, having gone through the arguments presented by both parties. For more, go to BBC.com.


Why the excitement about the coming 5G network is justifiable

5G evolution
The Internet of Things, driverless cars, smart cities, remote surgery, and most of the high-tech innovations in the pipeline are all tied to this next big technological hit. For IoT and the billions of devices that will be interconnected a few years from now, an exceedingly high-speed internet connection will be non-negotiable if it is to be a success.

The same goes for driverless cars, which are speedily being worked on; they will also require high streams of data and a fast, reliable, ruthlessly efficient wireless connection. Smart cities and remote surgery are not left out either, as each of them crucially needs an extremely fast, very low-latency connection.

In comes the 'boss': with all due respect to the current generations of wireless connection, none of them is even remotely close to being capable of what this new generation will do. Looking at the projected features of this next-generation wireless connection, you will be astonished.

As a professor at the University of Surrey put it, it is expected "that the 5G will be a dramatic overhaul and harmonization of the radio spectrum". From the professor's words, harmonization of the radio spectrum is central to what this super-fast, all-encompassing 5G network brings.

Currently there are bottlenecks to achieving this harmonization in the industry, which is why the ITU (International Telecommunication Union) is comprehensively restructuring the parts of the radio spectrum used to transmit data while allowing pre-existing communications, including 4G and 3G, to continue functioning; this will lead to the smooth and efficient running of 5G.

According to Prof. Tafazolli of the University of Surrey, it is possible to run a wireless data connection at an unbelievable 800Gbps, which is 100 times faster than current 5G testing. 800Gbps is an insane speed in the data world: put simply, it would be equivalent to downloading 33 HD films in a single second; that is some breathtaking speed and a whole lot faster than anything 4G is capable of providing.
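
The "33 HD films in a single second" figure is straightforward arithmetic if you assume roughly 3GB per HD film (my assumption, not a number from the professor):

link_gbps = 800                        # claimed lab speed, in gigabits per second
gigabytes_per_second = link_gbps / 8   # 100 GB of data every second
film_gb = 3                            # assumed size of one HD film
print(gigabytes_per_second / film_gb)  # roughly 33 films per second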

Another key point about this next-generation wireless connection, according to Sara Mazur (head of Ericsson Research), is "that the network will need to cope with a vast increase in demand for communication".

What this means is that 5G will be able to cater for the astronomical increase in the number of interconnected devices (IoT), projected to reach several billion by the year 2020 and beyond.

So 5G is the technology that will have the capacity to keep this number of connected devices running efficiently with very minimal latency and disruption.

Raising the capacity of this next-generation network is like widening a road tunnel: if you add more lanes, more cars (more data) can go through, and better ordering makes the flow more efficient.
One other feature of 5G that will put the older generations in the shade is its reliability.

"It will have the reliability that you currently get over fibre connections," says Sara Mazur of Ericsson. So in effect, this will lead to an end to sudden data connection drops-outs, which will be very crucial in a highly connected world, where even the slightest of drops in situations like using Robots remotely to operate and conduct advanced processes could be fatal. 

This next-generation wireless connection promises to be something very special in our ever-changing world of high-speed internet, with several other features such as bi-directional bandwidth, error-avoidance policies and many more.



Driverless Cars...BMW, Intel and Mobileye team up

In a bid to harness the power of driverless-car innovation, the German auto giant BMW has come together with Intel and the Israel-based tech firm Mobileye to work towards the development of world-class driverless cars. Driverless cars have been a subject of research and development for many tech giants, all very determined to achieve successful production and testing of these vehicles.

Aiming to roll out these cars by the year 2021 at the latest, BMW's team-up with these firms highlights a shift in the dynamics of research and development in the car industry, which until recently saw automakers largely dictating terms for suppliers to manufacture their proprietary technologies at specified volumes and prices.


Presently, automakers are hooking up with technology firms that have expertise in machine learning and mapping, which are essential ingredients in making driverless cars a reality. In a joint news conference announcing the alliance, Intel chief Brian Krzanich said that "Highly autonomous cars and everything they connect to will require powerful and reliable electronic brains to make them smart enough to navigate traffic and avoid accidents.”

Producing driverless cars with no driver behind the wheel, or in any of the front seats, will require huge computing power and software sophistication that these traditional automakers may not be able to handle expertly on their own. Continuing the press statements, the Intel chief stated that creating common technology standards would help all manufacturers update their vehicles faster: "That will be critical for advancing the safety aspects of this."

Beyond technological hurdles there are legal questions over who is responsible when a crash occurs. On Thursday (30th), the driver of a Tesla Model S car, operating in Autopilot mode, was killed in a collision with a truck in the United States, prompting an investigation by federal highway safety regulators.


When asked about the crash, BMW CEO Harald Krueger said: "The accident is very sad .... We believe today the technologies are not ready for series production," he added, explaining the alliance had not forecast that until 2021. "For the BMW group, safety comes first," he said. 

As part of the new alliance, Intel, the world's largest computer chip maker which has been looking to expand into the automotive electronics market, will supply the microprocessors - or central processing units - to control an array of sensors.

Auto camera and software maker Mobileye will supply its Road Experience Management (REM) technology and make its latest EyeQ5 chip available to be deployed on Intel computing platforms.
The three companies said they would demonstrate their technology in a prototype in the near future.

A common approach to standards will also make it easier for regulators to understand and approve the road worthiness of a vehicle while still leaving enough scope for individual car manufacturers to customize their cars, Mobileye Chairman Amnon Shashua said.

The future of driverless cars is something to look forward to with keen interest. Source: Reuters.

 

Friday, 1 July 2016

3D printing Processes


When discussing 3D printing technologies, it is important to understand that 3D printers come in varying grades and levels, and that some were early birds in terms of when they were produced and when they started being used, both commercially and non-commercially.

3D printing encompasses many different printing processes, which I will discuss in detail. These processes are listed below:
  • StereoLithography (SL)
  • Fused Deposition Modelling / Extrusion / FFF
  • Digital Light Processing (DLP)
  • Laser Sintering / Laser Melting
  • Inkjet: Binder Jetting and Material Jetting
  • Selective Deposition Lamination (SDL)
  • Electron Beam Melting
StereoLithography (SL) :


This was probably the first 3D printing process on the market; it was the earliest bird, being the first 3D printing process to be commercialized and made available for purchase and use.

I will be using the short form SL. It is a laser-based process that works with photopolymer resins (highly viscous substances), which react with the laser and are then UV-cured to form very solid and accurate parts.

It is a complex process, but it can be explained simply: the photopolymer resin is held in a vat with a movable platform inside.

A laser beam is directed in the X-Y axes across the surface of the resin according to the 3D data supplied to the machine (which is the .stl file), and the resin hardens precisely where the surface was exposed to the laser. Once the layer is completed, the platform within the vat drops down by a fraction (in the Z axis) and the subsequent layer is traced out by the laser. 



StereoLithography 3D printing process
This continues until the entire object is completed and the platform can be raised out of the vat for removal.
Now, due to the nature of the SL process, it requires support structures for some parts, specifically those with overhangs or undercuts, and these structures need to be manually removed.

Curing here is a little different from the curing we know in medicine; it involves subjecting the laser-exposed part to intense light in an oven-like machine to fully harden the resin.
SL is generally accepted as one of the most accurate 3D printing processes, with excellent surface finish. However, limiting factors include the post-processing steps required and the stability of the materials over time, which can become more brittle.


 Fused Deposition Modelling / Extrusion / FFF:
This is currently an industry-grade 3D printing process which works by melting plastic filament that is deposited, through a heated extruder, a layer at a time onto a build platform according to the 3D data supplied to the printer.

3D printing Process with FDM
Each layer hardens as it is deposited and bonds to the previous layer.
The developer of this technology, Stratasys, has created a range of proprietary industrial-grade materials for its Fused Deposition Modelling process that are suitable for some production applications. The FDM / FFF processes require support structures for any applications with overhanging geometries.


For FDM, this entails a second, water-soluble material, which allows support structures to be washed away relatively easily once the print is complete. Support structures, or the lack of them, have generally been a limitation for entry-level FFF 3D printers. However, as the systems have evolved and improved to incorporate dual extrusion heads, this has become less of a problem.

In terms of models produced, the FDM process from Stratasys is an accurate and reliable process that is relatively office/studio-friendly, although extensive post-processing can be required to improve its performance relative to other printers in the market.

Digital Light Processing (DLP):
DLP is another 3D printing process which, like the stereolithography process, utilizes photopolymers. Remember I said the light source for SL is a UV laser; well, the light source is the major difference between the two printing processes.


Digital Light Process

DLP uses more conventional light sources, such as an arc lamp, with a liquid crystal display panel or a DMD (deformable mirror device), applied to the entire surface of the vat of photopolymer resin in a single pass, making it faster than SL. Note also that, like SL, DLP produces highly accurate parts with excellent resolution, but it has similar drawbacks, including the same requirements for support structures and post-curing.

A Digital Light Process



 To be continued...

Thursday, 30 June 2016

World's smallest 3D printed lens could change surveillance systems forever

Almost every aspect of our lives and the things we use is being changed by the introduction of 3D printing. Like many other areas of technology, the planting of surveillance cameras is about to change completely with the advent of 3D printing technology.

The world's tiniest 3D printed lens
News coming out of Germany suggests this technology has achieved something very remarkable again with the creation of the world's tiniest 3D printed lens, which, according to the scientists who made it, is just twice the width of a human hair; that is ridiculously small for a lens.

This lens could revolutionize not just surveillance cameras but also health imaging, robotics and drone technology, the makers said. Dr Timo Gissibl (Stuttgart University, Germany) and some of his colleagues explained in a paper published in Nature Photonics this week how they 3D printed a triplet lens device by combining three of the lenses into a 'pinhead' device.

He continued that this pinhead device is capable of razor-sharp pictures and can be printed directly onto image sensors, such as those used in digital cameras, as well as onto optical fibres or even the tip of an endoscope (a camera used for internal examination of organs).

Put into perspective, one can see how this technology could even affect the electronic pills currently being produced, as they require lenses of just such a tiny nature. Dr Gissibl and colleagues wrote: "Current lens systems are restricted in size, shape and dimension by limitations of manufacturing.

Multi-lens elements with non-spherical shapes are required for high optical performance and to correct aberrations when imaging at wide angles and large fields. Here, we present a novel concept in optics that...Opens the new field of 3D printed micro and nano-optics with complex lens designs." 


They have no doubt achieved something very remarkable in 3D printing, which leads them to describe their innovation as a "paradigm shift."

This 3D printed lens was made by Dr Gissibl and his colleagues by using a device which emits short pulses of light to harden material onto which the 3D multi-lens system could be printed. 

In addition, Dr Gissibl said "that the unprecedented flexibility of our method paves the way towards printed optical miniature instruments such as endoscopes, fibre-imaging systems for cell biology, new illumination systems, miniature optical fibre traps, integrated quantum emitters and detectors, and miniature drones and robots with autonomous vision." 

Wow, what can be integrated with this tiny device can really stretch very far, because our world is increasingly going the way of miniaturization, making things smaller and smaller with each passing day.