PlayStation 4 Highlights

Apart from the constant streaming issues that plagued Sony’s online announcement, it is safe to say that everyone was impressed with what the PS4 had to offer in terms of social capabilities and performance. The PS4 has been designed from the ground up to be remarkably easier to develop for, which also bodes well for PC gamers, since ports will most likely improve a lot thanks to the similarities in architecture. Although it is a shame to see the Cell processor depart from Sony’s lineup given its capabilities, developers should be able to achieve new feats with the architecture that is now before them. Perhaps in time Sony will return to the Cell processor, but for now let’s look forward to what x86-AMD64 has to offer.

PlayStation 4 Gaming Hardware

Sony’s PS4 includes the long-awaited x86-AMD64 “Jaguar” 8-core processor and a GPU capable of 1.84 TFLOPS, based on AMD’s next-generation Radeon GCN architecture. The Jaguar processor itself is a concern because it was intended for ultrabooks and is likely to have a low clock speed, although it has an integrated GPU which should aid in the graphics department; we may even see instances of CrossFire presenting itself, providing a significant boost to graphics performance. The 8GB of GDDR5 unified memory presents even more benefits for graphics performance and should help the processor offload some tasks onto the GPGPU; Sony has also said it has made the GPU easier for developers to access.
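The headline benefit of that unified GDDR5 pool is raw bandwidth, which is easy to sanity-check. A minimal sketch, assuming the widely reported 256-bit bus and 5500 MT/s effective transfer rate for the PS4’s memory (neither figure is stated in this article):

```python
# Rough peak-bandwidth estimate for the PS4's unified GDDR5 pool.
# Assumed figures (not confirmed above): 256-bit bus, 5500 MT/s per pin.
def peak_bandwidth_gb_s(bus_width_bits, transfers_per_sec):
    """Peak bandwidth in GB/s: bytes moved per transfer times transfer rate."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * transfers_per_sec / 1e9

print(peak_bandwidth_gb_s(256, 5.5e9))  # -> 176.0 GB/s
```

If those assumptions hold, the PS4 would have roughly 176 GB/s to share between CPU and GPU, comfortably above the HD 7850’s 153.6 GB/s quoted below.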

My main concern with the processor is that the CPU is the heart of the system and should be able to handle intensive features and applications; for example, I can’t imagine soft-body physics running well on the CPU alone. This is also why I found the choice of AMD interesting, because Intel currently dominates AMD when it comes to performance. What is likely happening here is that Sony wants to keep both the cost and the TDP of the system down.

Image courtesy of TechPowerUp

For those of you interested in what Jaguar has to offer, look at the images above. The GPU is speculated to be slightly better than the AMD HD 7850 due to the similarity in processing power; however, we can expect better real-world performance because of the controlled environment the PS4 will be developed for. The GPU is estimated to have 18 compute units with 1152 stream processors, which collectively generate 1.84 TFLOPS of processing power. Hopefully our talented developers can make full use of the system; the details of the HD 7850 are listed below.

AMD Radeon HD 7850 GPU

  • 860MHz Engine Clock
  • 2GB GDDR5 Memory
  • 1200MHz Memory Clock
  • 153.6 GB/s memory bandwidth
  • 1.76 TFLOPS Single Precision compute power
  • 256-bit GDDR5 memory interface
  • PCI Express 3.0 x16 bus interface

GCN Architecture

  • 16 Compute Units (1024 Stream Processors)
  • 64 Texture Units
  • 128 Z/Stencil Units
  • 32 Color ROP Units
  • Dual Geometry Engines
  • Dual Asynchronous Compute Engines (ACE)
  • DirectX 11.1-capable graphics
  • OpenGL 4.2 support
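Both TFLOPS figures above fall out of the same GCN arithmetic: each stream processor can retire one fused multiply-add, i.e. two floating-point operations, per clock. A quick sketch, where the ~800MHz PS4 GPU clock is an assumption this article does not confirm:

```python
# Peak single-precision throughput for a GCN GPU:
# stream processors x 2 ops per clock (fused multiply-add) x clock speed.
def peak_tflops(stream_processors, clock_hz):
    return stream_processors * 2 * clock_hz / 1e12

print(peak_tflops(1024, 860e6))  # HD 7850: ~1.76 TFLOPS
print(peak_tflops(1152, 800e6))  # PS4, assuming an 800MHz clock: ~1.84 TFLOPS
```

The fact that 1152 stream processors land exactly on the quoted 1.84 TFLOPS only if the clock is around 800MHz is what drives that speculation.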

PlayStation® Network

PlayStation Network has also been completely overhauled and redesigned around the way the user interacts online; a lot of attention was put into how users interact with other players, or more specifically their “friends”. It’s been made quite clear that companions can offer assistance in certain situations when playing a game, and through the services provided by Gaikai it is now possible to stream live demos to your console, which in itself is pretty good and innovative. Gamers can also share their experiences on social networking sites such as Facebook at the push of a single button on the PS4 controller, and with this you can expect to see 100% more YouTube videos online when the PS4 gets released.

Sony’s redesigned PlayStation Network

The PS4 also allows the user to broadcast their gameplay in real time to friends using live internet streaming services such as UStream. Friends can also contribute to each other’s games by providing in-game items and potions, another example of the way PlayStation Network allows interacting with friends in a whole different way, as mentioned earlier. The results of such an endeavor remain to be seen, but it is surely a sign of being truly connected.

Integration with PS Vita and Smartphone devices

Sony seem to be taking a leaf out of Nintendo’s book, joining the second-screen club along with Microsoft in an effort to compete with Nintendo. Sony also seem to be doing some fresh marketing for the PS Vita, which clearly needs a boost after disappointing sales; Sony may even revolve its strategy around the Vita and PS4 in an effort to sell more units. So far the Vita has been shown acting as a second screen displaying maps and other content, and Sony have even released their own app, the PlayStation App, which will bring the same features to tablets and smartphones in direct competition with Microsoft SmartGlass.

Knack, a PS4 game running on a PS Vita

It’s clear that Sony is not pulling any punches here, but it will take a lot of effort to push this feature ahead, and it remains to be seen whether Sony can negate the security risk of smartphones being used to purchase content. The main feature the PS Vita demonstrates is the ability to stream PS4 games to the Vita over Wi-Fi, similar to how the Nintendo Wii U GamePad communicates with its console, only this time the Vita employs a much superior screen, which is sure to stir discontent among Wii U owners.

PS4 Immediate Gameplay, Personalized Content and New Hardware

Sony has managed to implement a “suspend mode” which keeps the system in a low-power state while preserving any game session. What this basically means is that the PS4 will have an advanced version of sleep mode, allowing you to turn off your console at any time and turn it back on whenever you feel like it. The PS4 will also let you play music and use the web browser while playing a game, something that has been long overdue since the PS3. The PS4 also implements other PC-like features, such as downloading or updating games even in standby mode; and as the user continues to purchase content, the system will try to predict what else the player may like and download it before the user even presses the download button.

Sony are introducing PlayStation Move functions into their new controller, the DualShock 4, which is interesting since they had similar technology in the DualShock 3 in the form of “Sixaxis” motion sensing, comprising a three-axis gyroscope and a three-axis accelerometer. It allowed the gamer to control certain functions of the game by positioning the controller in a certain way, like a Wii Remote. For example, in Heavenly Sword, Kai had a bow and arrow, and the player could control the trajectory of the arrow in flight by tilting the controller. It is unknown how Sony’s PlayStation Move technology will add to what has already been designed, but my guess would be that it refines the accuracy of the motion sensing. The controller is overall similar to the DualShock 3, with several new features such as the built-in touch pad on the front. There is a light bar that can display different colors and is used to indicate which player is which, and a 3.5mm stereo headset jack and micro USB port have also been added as connectors.

DualShock 4

Sony has also implemented an improved PlayStation Eye, which includes two 1280x800 cameras with f/2.0 lenses, a 30cm minimum focusing distance and an 85-degree field of view. The two lenses will be used to triangulate 3D space, enable gesture recognition and support body tracking; it’s basically Sony’s version of Kinect. One camera will be dedicated to capturing whatever is going on and ensuring good picture quality, while the other is dedicated to motion tracking. Apparently, the reason the Move functionality was incorporated into the DualShock is to enable the console to know where you’re sitting in relation to the TV. It will also have a four-channel microphone array and will be able to record in RAW and YUV uncompressed formats. Hopefully Sony will allow us to record whatever we want on these cameras, and there is the potential for video chat as well.

Sony’s new PlayStation Eye

Cloud Gaming and Gaikai

Sony is re-introducing their cloud-based music subscription service, Music Unlimited, and their cloud-based video subscription, Video Unlimited, both of which were present on the PS3; Sony seem to be taking a huge bet on streaming technology for their next-gen system as the war for faster internet speeds continues. Sony are also introducing a new feature into their online portfolio which will allow people to “stream” a portion of any game instead of downloading a demo, using Gaikai as the medium. Sony intends for gamers to try the games they actually love before having to pay for them; that said, streaming technology is likely to be hit or miss if internet speeds don’t increase to match it. Sony also plans for backward compatibility to be available through this same streaming process, which is likely to receive a negative reaction from the gaming community since consumers would have to pay for their games again; it’s unknown how Sony intends to compensate for this flaw.

Personally, I don’t see why Sony can’t create a software emulator for their previous consoles instead, since that would be much easier, and leave the demos to Gaikai’s streaming service. If software like PCSX2 can be developed to play PS2 games on x86 PC hardware, I don’t see why Sony can’t do something similar on their console. Microsoft took this route with the Xbox 360 and it has served them well ever since, because it avoids the cost of building previous hardware into the system and thus inflating the price of the product. Nintendo are quite lucky in this situation, since the technology in the Wii and Wii U is very similar, which allows them to run past software without much effort; it’s clear Sony needs to join the club in this matter.

Conclusion

We can expect to see more details on the console at E3, maybe even an unveiling of the actual machine they were supposed to unveil yesterday, which was very disappointing. Pricing is estimated to be around the £270-£340 range, so it should be affordable to most people in the current economic climate. From now until June we should see more games being unveiled from developers who are interested in making the most of the PS4.

GK110 GeForce Titan Finally Unveiled!

The beast has finally been unleashed! Nvidia has decided to launch the super-performer GeForce Titan for the consumer market and for those with deep pockets. The GK110 has 15 SMXes, each composed of a number of functional units: 192 FP32 CUDA cores, 64 FP64 units, 64KB of L1 cache, 65K 32-bit registers, and 16 texture units. Alongside this, the GK110 packs 6 ROP partitions, each with 8 ROPs and 256KB of L2 cache connected to a 64-bit memory controller.

Coming in at a massive 7.1 billion transistors, it takes up 551mm² on TSMC’s 28nm process. The GK110 was originally meant to be the flagship of the 600 series, but NVIDIA had other plans: the GPU was held at bay and promptly replaced with the GK104 core instead. After winning the supercomputing bid for Oak Ridge National Laboratory’s Titan supercomputer, Nvidia probably emptied their pockets of all the GK110 cores they had with the Tesla K20X GPUs, but they are now ready to release the second device based on the GK110 (technically the third), which they call their greatest accomplishment for the consumer market.

The GeForce GTX Titan uses a restricted GK110 core with all 6 ROP partitions and the full 384-bit memory bus enabled, but only 14 of the 15 SMXes, which means the Titan will be swinging with 2688 FP32 CUDA cores and 896 FP64 CUDA cores. Fear not, for this is actually identical to the K20X NVIDIA is currently shipping; there isn’t any GK110 currently shipping with all 15 SMXes enabled, and no one really knows why. The GTX 680 was able to clock as high as 1006MHz, but this is not the case for the GeForce GTX Titan: since it is a much bigger GPU, it has to be downclocked to a reasonable 837MHz, with a boost clock of 876MHz, which isn’t much of a boost to be quite frank. However, NVIDIA were kind enough to slap 6GB of GDDR5 RAM on the GPU, making it ideal for multi-monitor displays; this gives it more bandwidth than it knows what to do with and a whole lot more shading/compute and texturing performance. Since Titan is based on a compute GPU, enthusiasts will also benefit from the extra horsepower without any of the limitations that prevented other GeForce GPUs from reaching their full potential in an effort to protect NVIDIA’s Tesla lineup.
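Those shader counts follow directly from the SMX layout described above, and the peak throughput from the clocks. A quick sketch, assuming the standard Kepler rate of 2 floating-point operations per core per clock (one fused multiply-add):

```python
# Titan's FP32 core count: 14 enabled SMXes x 192 FP32 cores per SMX.
def fp32_cores(smx_count, cores_per_smx=192):
    return smx_count * cores_per_smx

# Peak throughput: cores x 2 ops per clock (fused multiply-add) x clock.
def peak_tflops(cores, clock_hz):
    return cores * 2 * clock_hz / 1e12

cores = fp32_cores(14)
print(cores)                       # -> 2688
print(peak_tflops(cores, 837e6))   # ~4.5 TFLOPS at the 837MHz base clock
```

That ~4.5 TFLOPS single-precision figure (more at boost clocks) is why the modest clock speed is less of a worry than it first appears.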

GK110’s efficiency compared with other platforms

Titan makes use of the same style of cooler as its predecessors, the GTX 690 and 590, before it. Showing off its luxury status, there is no piece of plastic to be found on the card, and NVIDIA has tried to make it as clean as possible; there is even a polycarbonate window allowing you to see the heatsink to your heart’s content. Titan moves on from the 4+2 power phase design that debuted on the GTX 680 and makes use of a 6+2 power phase design; a 6-pin and an 8-pin connector provide power for the card, allowing for a total of 300W of capacity, with Titan sitting comfortably at a TDP of 250W and a subtle overhead for extreme overclockers. In addition, Titan follows precedent with two DL-DVI ports, one HDMI port, and one full-size DisplayPort.

Front view of the Geforce GTX Titan

Nintendo Wii U Review

Intro

The Wii U is the next-generation console that Nintendo have to offer against Microsoft and Sony; it has a bright future ahead of it and is expected to do quite well. There were worries about the console not having enough exclusives, but games like Bayonetta and Scram Kitty are sure to keep the naysayers at bay for a while, until of course Nintendo begins to release the games that made Nintendo what it is in the first place (The Legend of Zelda, Metroid and so on). Its predecessor, the Wii, set up the pillars of innovation which Nintendo would continue to be known for up until today, when the Wii U takes the Wii’s place, although Nintendo isn’t finished with the Wii yet (Wii Mini).

The Wii U is said to be more powerful than the PS3 and Xbox 360, and when its hardware is exploited in the correct way by developers we can expect to see wonders performed by it. For some the power may be a selling point, but for most people it is the innovation that will be catching the eyes of the crowd. Nintendo offers the Wii U as the Basic Edition, which has 8GB of storage and comes in white, and the Pro Edition, which has 32GB of storage and comes in black. It wouldn’t make sense to buy the white one unless you’re on a budget, because the storage space is so small, but it has been confirmed that you can expand the storage with an SD card or an external hard drive. The Wii U is almost the same size as the original Wii, which is a great achievement considering the hardware inside it. The Wii U will also play Wii games, but drops support for GameCube games, seeing as that console is quite old by today’s standards; I do expect to see popular GameCube games released on the eShop though.

Controller

What Nintendo has decided to do this time is implement a screen into the controller for what may make for unusual gameplay. Something like this isn’t unexpected from Nintendo, since its mandate has been to focus on fun rather than all the fancy things the computers of the time can do, and that may very well be the case with the Wii U, although it helps to take a closer look at what Nintendo seems to be offering. The 6.2-inch screen has been designed to work in coordination with your TV. It’s good to know that the controller is wireless and is the core of the system that makes it work (apart from the main console hardware itself). It has the buttons you would expect and two analog sticks, which are commonplace in the gaming world, and it even has NFC! Some may argue that it is oversized, but the controller feels light and good in the hand, apart from the poor battery life which owners are all too aware of. If the controller is not for you, you can always buy a Nintendo Pro Controller, which bears similarities to the Xbox 360 controller. An additional feature is being able to play a game on the GamePad independently of the TV, although range may hinder the experience for some; people may call it common sense, but I would suggest this is a likable gimmick. Unfortunately the GamePad only comes on its own with no companion controller, which means only one GamePad may be used at a given time.

Media and OS

Nintendo has decided to include various services in the package, such as Netflix and Lovefilm, and we expect to see more as the months pass by; we hope to see innovative takes on Twitter and Facebook soon, but that might be asking for too much. There is also Miiverse, which acts as a social gathering place to discuss different games, similar to what Sony and Microsoft offer. Nintendo has also set up the necessary protection to prevent children from easily accessing Nintendo’s online world, which is something I always like to see. The OS UI is remarkably similar to the Wii’s, using the same scheme of different channels (apps) for the user, and I expect to see more of them coming to life as time goes by. Nintendo prevents the user from playing DVDs, which is somewhat of a shock; this effectively reduces its standing as a media device and makes it more of a gaming device, but to be fair you couldn’t do this on the Wii either.

If you’re lucky enough to encounter an update after booting up the Wii U you just bought, you may be in for a very long wait, because this particular patch enables a lot of the Wii U’s functions, and without it the console simply wouldn’t work well; the UI speed certainly needs more tuning, as it is too sluggish for comfort. Nintendo has finally introduced a viable online system worthy of notice, which competes with PSN and Xbox Live: it offers basically the same functionality you would expect from your PS3 or Xbox 360 and does a pretty good job of it as well. The eShop is a port of the 3DS version and is stocked with most of the games you can currently get on Wii U at full price, plus a decent amount of indie games for core gamers; you may have to tame your wallet, as the prices are quite high in this luxury suite. The Wii U includes an internet browser which does a satisfactory job of browsing the web in a simple manner; everything you would expect from a desktop browser is included, plus the ability to view content on the GamePad alone.

Conclusion

For those of you who believed that the Wii was the next big thing after sliced bread, the Wii U surely won’t disappoint and will rise to meet your high expectations. Many of the Wii U’s shortcomings, such as the lack of games, will eventually be resolved with the frequent releases planned for the console, and its potential will most likely be realised when developers take advantage of the additional GamePad and make new channels for the Wii U.

NVIDIA’s Next Generation CUDA Compute Architecture: Kepler GK110

Due to all the hype surrounding the GeForce Titan, I think it would do everyone well to talk about the more technical aspects of the technology behind this product and to review and evaluate it. Quoted by Nvidia to be the fastest and most efficient architecture ever built, the Kepler GK110 fills the flagship spot that the GK104 was pressed into but could never quite fill. Everyone knows that the GK110 core was meant to be the flagship for Nvidia’s GeForce and Tesla lines, but that apparently didn’t go to plan due to low yields, so Nvidia had to look for something else to cover up this tragedy.

The GK104 just so happened to fit the bill, but came at the price of delivering sub-par compute performance compared to the GCN-based HD 7970. The chip even managed to be defeated by the GF110 used in the GTX 580 in some tests, which is extremely surprising to say the least, especially after Nvidia made such a big deal of compute performance in the 500 series. There were also clear examples of sabotage on Nvidia’s part in an effort to protect their professional line: for example, Nvidia limits 64-bit double-precision math to 1/24 of the single-precision rate, effectively crippling GPGPU performance.
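To see how much that 1/24 cap hurts, divide the single-precision peak by 24. A rough sketch; the GTX 680 figures used here (1536 CUDA cores at a 1006MHz base clock) are assumptions not taken from this article:

```python
# Peak single-precision rate: cores x 2 ops/clock (fused multiply-add) x clock.
def peak_sp_tflops(cores, clock_hz):
    return cores * 2 * clock_hz / 1e12

# Assumed GTX 680 configuration: 1536 CUDA cores at 1006MHz.
sp = peak_sp_tflops(1536, 1006e6)   # ~3.09 TFLOPS single precision
dp = sp / 24                        # capped at 1/24: ~0.13 TFLOPS double precision
print(round(sp, 2), round(dp, 2))
```

Around 130 GFLOPS of double precision from a ~3 TFLOPS chip is exactly the kind of figure that let the older GF110 keep up in compute tests.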

The reason compute performance is so important is that these are the same graphics cores used inside Nvidia’s professional line, which means that if something performs poorly in a consumer-oriented product, chances are the same will happen in professional-oriented products. Although it’s not really a major issue, since the professional line makes use of the GK110 core. However, everyone knows that Nvidia will not allow itself to be willingly slaughtered by AMD when it comes to compute performance without fighting back; this is where the role and responsibility of the GK110 core comes in.

GK110: Overview

According to Nvidia, the Kepler GK110 comprises 7.1 billion transistors and is the most architecturally complex microprocessor ever built; it was originally designed to be a compute powerhouse for Tesla and the HPC market. The GK110 will provide over 1 teraflop of double-precision throughput with greater than 80% DGEMM efficiency, versus 60-65% on the prior Fermi architecture, and as we all know, Kepler’s power efficiency is outstanding. The Kepler GK110 also introduces new features aimed at increasing GPU utilization and simplifying parallel program design, which should be a healthy asset to many developers. A full Kepler GK110 implementation includes 15 SMX units and six 64-bit memory controllers, although not all products will use all of the SMX units; some will use 13 or 14.
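Those efficiency percentages translate into sustained throughput as peak rate times efficiency. A sketch of the arithmetic; the K20X-style configuration (14 SMX x 64 FP64 units at 732MHz) and the ~665 GFLOPS Fermi-era Tesla peak are illustrative assumptions, as neither clock appears in this article:

```python
# Peak double-precision rate: FP64 units x 2 ops/clock (fused multiply-add) x clock.
def peak_dp_tflops(fp64_units, clock_hz):
    return fp64_units * 2 * clock_hz / 1e12

# Assumed K20X-like part: 14 SMX x 64 FP64 units at 732MHz -> ~1.31 TFLOPS peak.
kepler_peak = peak_dp_tflops(14 * 64, 732e6)
print(round(kepler_peak * 0.80, 2))   # sustained DGEMM at 80% efficiency
print(round(0.665 * 0.65, 2))         # assumed Fermi Tesla peak at 65% efficiency
```

Under these assumptions the GK110 sustains just over 1 teraflop in DGEMM, matching Nvidia’s claim, where a Fermi part managed well under half that.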

The key features of the architecture include:

  • The new SMX processor architecture 
  • An enhanced memory subsystem, offering additional caching capabilities, more bandwidth at each level of the hierarchy and a fully redesigned and substantially faster DRAM I/O implementation (expect to see an ARM processor handling this in Maxwell)
  • Hardware support throughout the design to enable new programming model capabilities.

The benefits of Dynamic Parallelism

One such feature is Dynamic Parallelism, which adds the capability for the GPU to generate new work for itself, synchronize on results, and control the scheduling of that work without involving the CPU. Programmers can now take advantage of more varied kinds of parallel work and make the most efficient use of the GPU as the computation “evolves” and advances. This benefits the system by offloading work from the CPU, and programs become easier to create.

The benefits of Hyper-Q

Hyper-Q allows multiple CPU cores to feed work to a single GPU simultaneously, which in turn increases GPU utilization and significantly reduces CPU idle time. What Hyper-Q basically does is increase the total number of active connections between the host and the GK110 by allowing up to 32 simultaneous, hardware-managed connections. Applications that encountered false serialization across tasks, which would limit the achievable percentage of GPU utilization, can see a dramatic increase in performance without any changes to CUDA code.

NVIDIA GPUDirect

This is a capability that enables GPUs within a single computer, or GPUs in different servers, to exchange data directly without needing to go through the CPU or system memory. It also reduces demands on system memory bandwidth and frees the GPU DMA engines for use by other CUDA tasks. The GK110 also supports other GPUDirect features, including peer-to-peer and GPUDirect for Video.


Conclusion

The documentation available on the GK110 far surpasses what has been said in this article, and it would not do the product justice to list everything here, so the rest will come at regular intervals. That the GK110 is a compute monster has already been proven, with many achievements appearing in recent news, such as the sale of a cluster of K20X cards to China for a supercomputer. If the release of the GeForce Titan does come to pass, then we will be able to see the gaming performance of this card and whether it annihilates the competition or not.

Surface Pro Expansion Options – External Options – Part 1

As many of you Microsoft haters and fanboys will already know, the storage limitations of the Surface Pro are quite something to think about. Those of you who have thoroughly done your homework on the Surface Pro will have come to the understanding that storage space can be expanded with an SD card, or you can buy an external hard drive to satisfy your Windows needs, and if your wallet is really adventurous you can attempt to buy one of the exotic external SSDs lingering around the web looking for a new owner. As exciting as this sounds, we all know about the woes of storage space and how much of an issue it can be; luckily I just happen to be one of the privileged few to own a 120GB SSD, so I can tell you all you need to know in terms of future options. So I would recommend and urge the soon-to-be 64GB Surface Pro owners to pay extra attention to the following doctoral dissertation on storage space, which will come in two parts.

SD Cards

The image above is a high-capacity SD card capable of a read speed of 60MB/s and a write speed of 35MB/s; this is the kind of storage you would use in a camera and then put into your laptop so you can edit the pictures, and it’s really good for mobility purposes because it’s so small. On that note, it wouldn’t be the best kind of storage for the Surface Pro, mainly because it would be really slow. It is possible to get a much faster SD card, such as the Extreme Pro 64GB SDXC with read and write speeds of 95MB/s, but it costs a rather expensive £95 and you would still only be bordering on traditional HDD territory, which isn’t much. It would probably be best to stay away from SD cards and look towards buying an external HDD/SSD, although if you have one lying around it wouldn’t hurt to stick it in and see how it copes, right?
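To put those speeds in perspective, here is how long filling a 64GB card would take at each card’s sustained write speed, using the figures quoted above:

```python
# Time to fill a card at its sustained write speed, in minutes.
# 1 GB is treated as 1000 MB for a rough back-of-the-envelope figure.
def fill_time_minutes(capacity_gb, write_mb_s):
    return capacity_gb * 1000 / write_mb_s / 60

print(round(fill_time_minutes(64, 35)))   # 35MB/s card: ~30 minutes
print(round(fill_time_minutes(64, 95)))   # 95MB/s Extreme Pro: ~11 minutes
```

Half an hour to move 64GB is exactly the sort of sluggishness that makes an external drive the better option for anything beyond light use.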

External HDD/SSD

If an SD card isn’t your cup of tea, you can always reach out to the next best thing, a USB external hard drive; a reputable product would be Western Digital’s Elements external hard drive. Although this particular drive doesn’t utilise USB 3.0, it spins at 7200RPM and is most likely way faster than your average SD card; plus, for the amount you’re spending, you get more bang for your buck when buying an external hard drive, seeing as it only costs £70.05 for 2TB, which is quite a deal. This would be preferable for those who don’t mind carrying something a little heavier than a traditional SD card; it doesn’t seem to be very big either, so it shouldn’t present too much of a problem for those who want to buy it.

Cloud Storage 

Another way of storing additional data is to use a cloud service such as Dropbox for things like large zip files or numerous documents. Since the Surface Pro is a fully fledged Windows 8 tablet, there really is no limit to which services you can use, so uploading anything is possible. It’s also possible to upload large files to cloud storage and pull them down on another computer, thus freeing up space. This option would be ideal for those who work on large projects and require more than one computer to complete that work.

Sony NEX-VG900 – The gap between Heaven and Earth, What DSLRs are to digital stills cameras

Sony’s NEX-VG900E camera brings a new meaning to the word expensive, and is far superior to your average Handycam or conventional DSLR. This camera is not for the faint-hearted, as it costs £2,999, although you can get it for £1,899.95 on SLRHut’s website; a good bargain, eyy? It’s likely the deal won’t last forever, though, so get it while you can.

Sony NEX-VG900E

What’s so special about this Camera?

To start things off, the NEX-VG900E has a mind-boggling 24.3MP resolution, plus a sensor that’s a cool 40 times larger than the one in your average consumer camcorder. It also includes highly sought-after features such as bokeh defocusing, tonal gradation and 24MP still photos exportable in RAW format. Just like any DSLR, the NEX-VG900E allows you to swap lenses as you please, and its compatibility with both A-mount and E-mount lenses underlines that this camera is designed to let you exercise your creative genius in as many ways as possible. On that note, the camera also includes a Mic Controller package with a Quad-Capsule Spatial Array microphone capable of crystal-clear 5.1-channel surround recording. Like most Handycam-styled cameras, the NEX-VG900E benefits from an Xtra Fine LCD 270-degree swivel display and a highly detailed XGA OLED Tru-Finder.


Microsoft finally introduces their x86 Tablet PC, the “Surface Pro”

 

Microsoft is gearing up to release the Surface Pro in the United States and Canada on the 9th of February with a starting price of $899, translating loosely to around £565, without a Touch Cover keyboard, which would usually cost an additional £100. A British and European launch is expected to follow in the coming weeks, with Microsoft saying they will carry out a “phased approach” for the Surface Pro. A price of £699 is expected in the UK.

The Surface Pro is the older brother of the Surface RT, which has watered-down specifications in comparison. The Pro version of Microsoft’s tablet contains a far more powerful x86 processor than what you might find in your conventional iOS or Android tablet and runs the full-blown version of Windows 8, making it a favorite in regards to its potential for increased productivity and mobility; however, it is also thicker and heavier and has a significantly shorter battery life of approximately four and a half hours. In comparison, the Surface RT employs a power-efficient Tegra 3 ARM-based processor running a less capable version of Windows 8 and depends on the App Store for the applications available to the consumer.

Microsoft’s Surface Pro

The Surface Pro is what businesses should be, and will be, looking into in regards to their independent needs. For this specific reason, the Surface Pro can be considered an innovation in its own right, mainly because it can function as both a tablet and a fully functional PC, which in theory far surpasses the capabilities of any other tablet on the market. Many enthusiasts have suggested that coupled with Intel’s newest architecture, Haswell (which offers increased performance, battery life and gaming performance), the Surface Pro would be a technological marvel, with the declaration that the so-called “tablet” would know no peers in regards to its perceived technical capabilities and its use in the business world. However, in reality (as always), price becomes a factor that all individuals must adhere to and carefully assess before deciding whether the benefits outweigh the costs. Fortunately, although the device may be expensive for the average consumer, businesses, or more specifically large corporations, will most likely be able to shoulder the cost. If Microsoft’s dominance of the business sector is anything to go by, then it would be safe to say that the Surface Pro will be quickly adopted.

Be that as it may, it would not be far-fetched to assume that there really is no need for the Surface Pro in the corporate world, since the device the Surface Pro is trying to replace is the ultrabook or laptop. It certainly doesn’t help that Microsoft went with the i5 chip instead of the i7, whose extra performance many people would deem useful within the right circumstances, or that Windows 8 doesn’t seem to be resonating with potential customers. Based on these assumptions, it would be safe to assume that the Surface Pro is a “luxury device”, a device which has its own merits but which individuals (and especially businesses) can live without. Although I also see the potential for this to be a non-issue, given the fact that the Surface Pro truly is a PC; its array of capabilities includes using a 4G wireless dongle, video conferencing, the use of an independent mouse and keyboard, a USB slot, a fully capable stylus, an SD slot, a Mini DisplayPort… the list is endless.