#Free #Education – a click away, literally.

http://www2.pcmag.com/media/images/331925-10-great-free-online-education-resources.jpg?thumb=y


We live in a very competitive age. From resources to jobs, and from jobs to money, everyone is competing for something in one way or another. And once we settle into a career, most of the money we earn depends on the education we gathered throughout our academic years – that piece of paper (i.e. degree, diploma, certificate, etc.).

Not everyone can afford or attain every level of education or knowledge – so the question arises: is there a way for such individuals, or society in general, to obtain the same information with ease – and for free?

Thanks to technology and multiple levels of investment from academic institutions — the answer is yes, and… no.

The good news is: you can now learn all the content taught in, for example, a Liberal Arts degree online, for free – but by teaching yourself. Detailed lessons, PowerPoint presentations, assignments, syllabi, book references, and tests are posted on participating universities’ websites – you can download them as one pre-packaged course and work through it yourself. Some universities are better than others with video demonstrations – but this approach is more of a ‘free viewership of an archived class’, available to everyone.

The bad news is: you can’t really “earn” an official degree/diploma/certificate by learning these courses yourself. You can learn the content, and probably do a better job than the prof who designed the course, but you can’t get that ‘paper’ without paying the university. The main point of this approach is to make content available – so that information is at least accessible to all; meanwhile, regulation and recognition of the content remain the university’s responsibility. Universities will provide official paperwork to those who successfully go through their exclusive system – that’s how they get paid.

If your company doesn’t care about a piece of paper and appreciates applied knowledge gained through exposure to information – you might as well get down to studying these courses. They span from engineering to biology and other desired fields.

Check out this link for a list of online websites offering free education (Some highlights: MIT, iTunes, Khan Academy, etc.)

Curiosity leads to another question – should one even bother with free education content, complete with assignments and tests prepared by professionals?
– I say, what’s stopping you? Information is knowledge, knowledge is power, and power is change – and change is always good. Go for it; I know I am.

Source#1: Gizmodo


2012-2013 Flagship #SmartPhones – Confused?

http://img.talkandroid.com/uploads/2012/11/stuff-wine-smartphone-guide.jpg


The saturated and addictive world of smartphones.

And no, don’t blame Android for it – having options is better for everyone. Options bring market prices down and create a competitive market for greater innovation. Just look at the varieties available: waterproof shields, NFC ‘bump’ abilities, megapixels, wireless charging – there’s a unique flavour and niche in every device out there today that can be the deciding factor for your money and your choice.

Yes, Apple created the spotlight, but somewhere along the way there was an opportunity they clearly missed. Hence, the world of Android smartphones is all around us. Apart from the Droids and the sheep – we can thank Nokia (Lumia 920) and BlackBerry (Z10/Q10) for staying alive and offering a whole new blend of flavours within the same market. Take a look at the flagship devices available today – with specs, which are basically the best way to compare nowadays:


[Spec comparison table]

Now – with so many options available, where should you step in? What phone should you choose? It all depends on your taste and the peers around you. That’s the realistic picture. For a simplified version of the table above – here’s the niche:

Sony Xperia Z (Android) – waterproof (quad-core)
HTC One (Android) – UltraPixel camera (quad-core)
Samsung Galaxy S3 (Android) – MultiWindow (real multi-tasking), and other TouchWiz interface wonders (quad- or dual-core)
Nokia Lumia 920 (Windows 8) – wireless charging, 8.7 MP PureView camera, unique screen, new UI (dual-core)
Apple iPhone 5 (iOS) – the hype, resale value, tons of apps, but same interface as usual (dual-core)
BlackBerry Z10 (BB10) – a completely new OS with boast-worthy features, Android app portability (you can convert any Android app yourself – but if bugs are found there’s no support from the dev unless the app is officially ported), BBM (dual-core)

The dreadful picture is – there’s no easy way to pick and choose. All platforms are so cohesively designed that one can easily get accustomed to each interface – regardless of apps, specs, or hardware.

The majority of the hype lies around the availability of apps, even though in reality what matters is an app’s integration with the OS and its supplementary apps. And of course, the crowd that supports/follows it. The feature that should matter most in a device is its ability to share, connect, and manage its battery efficiently. Unfortunately, that’s overlooked more than 90% of the time. It’s a known problem, and reviewers skip it due to the lack of solutions out there. Lack of awareness will not diminish the underlying issue; the hype, however, seems to have dissipated when it comes to batteries.

Operating systems like BB10 have integrated a lot of these so-called “needed” apps into the OS itself – apps that haven’t been ported by developers. This is a perfect example of how having such apps in the marketplace is not a necessity. Apps, primarily speaking, aren’t a real priority for some people. Where apps aren’t a priority, a tradeoff is made – mostly towards things like a multi-megapixel camera, pretty screens (ppi), weight, enterprise-level features and services, and miscellaneous abilities (NFC, gyroscope, etc.).

By default, where’s the unique experience? Windows 8 and BB10.
By default, where’s the largest amount of device choice? Android.
By default, which device has the best resale price? Apple.
By default, all devices have the same or competitive buyout prices.
By default, all devices and their processors are comparable.
By default, all devices and their cameras are comparable.
By default, all devices are lightweight and carry similar battery capacities.
By default, all devices have apps that can pretty much do everything (except for a few things). iOS/Android have the biggest markets – but that doesn’t mean the others are empty.

What’s the best phone available right now – for you? Go through the questions above – and whatever tickles your pickle – cross it out, then go out there and shop.

In my opinion, Android makes sense – due to the amount of choice. iOS only has one device. Windows 8 has a few devices, but not enough. The BB Z10 is in the same boat as iOS.

Source: NDTV Gadgets


Urine… equals 6 hours of electricity!


http://farm8.staticflickr.com/7266/8161674482_6afa443513_c.jpg


One person’s waste is another person’s treasure. Clichéd much?

Once you hear what these girls in Africa came up with, you just might want to be clichéd yourself. Three female students at Africa’s “Maker Faire” (science fair) generated 6 hours of electricity from 1 litre of urine, using home-made machinery and chemical reactions. Of course, technical knowledge (chemistry) is required – and certain elements of the experiment can be quite hazardous.

The process is not for the faint of heart, but if you are interested, go over to the MakerFaire website for detailed pictures and instructions.

If the students can do it, so can you – it’s all about learning and applying!


I always used to wonder if there was a way to re-use this waste that we constantly produce. It is, after all, a chemical reaction with an input – and whatever has an input always has an output. The reason we humans are far more advanced than other species on this planet is that we can come up with innovative ways to improve our lives – even using our own waste, for instance. This has to be one of the greatest projects ever made; I’m just surprised the great people/corporations of the “1st world nations” never came up with something like this. Or maybe they did, but never bothered to pursue it further.

Urine is something every single human secretes, constantly, from birth to deathbed. I’m sure the other waste that we produce (in solid form) has a lot more nutrients that we could re-use for similar or better purposes.

Just imagine: no dependency on fossil fuels or any non-renewable resources. No need to pay high amounts to these corporations. Buy a simple waste-to-electricity converter – and re-fuel it by simply discharging your own waste. The idea might sound disgusting at first, but wouldn’t the waste be BETTER off assisting us than harming the environment where it usually ends up?! I’m sure even this process has a wasteful discharge, but it’s probably not as significant as the amounts that currently flow into the sewage system. Once the waste has been passed into the generator, simply re-charge your devices/house/or-whatever-it-may-be, and voilà – you’re out the door doing something else.

Of course, all dreams are imagined to be beautiful, but the main question (and a real one) we should ask ourselves is: will this tech reach consumers any time soon? I highly doubt it. The incumbents surrounding the oil industry and consumerism are far too greedy and powerful to move people away from oil and oil-based products. Our communities are consumed by products that heavily depend on fossil fuels, so it’s highly unlikely to change for now. Apart from the unproven controversies, there are other, bigger issues surrounding this imminent change. A lot of jobs surround these dependent industries; realistically, it would be disastrous for many companies – and our economies – to bear witness to a sudden shift in resource consumption.

Nevertheless, I still have hope that one day we will shed our fossil-fuel dependency and give way to better, more efficient ways of producing electricity – ways that are good for the economy, the environment, and humans in general.

Source 1: Engadget
Source 2: MakerFaire


Evolution of network-TV shows: #Netflix – House Of Cards



Conventional TV (network / cable television) and its shows have carried society’s greatest – and worst – stories for many decades. In a span of 20- or 40-odd minutes, episodic TV shows not only summarize eventful situations, but end up twisting and concluding them with customized ideas and cliffhangers.

Are those time limitations (20/40 minutes) truly necessary? Do they really allow directors to be creative or expressive in their work? Most shows on network TV end up with illogical twists and poor content. Because of such restrictions and enforced ideas, not all shows are really worth anyone’s time – nor any food for thought. That’s the reason there are a lot of “bad shows”, and the majority of them are just ‘fillers’ for content. An average show has to fall strictly within the allotted time, disabling the creativity of the director/editor/story-writer – just to categorize the show as “fit” for network television.

Some questions that need to be re-examined for TV-show approvals:

  • what constitutes entertainment on TV?
  • how should a TV-show be “fit” for TV?
  • how should it impact society and young children in the long run?
  • what’s being learned?
  • what could be potential dangers of the idea being presented?

So, what does it really mean for a show to be “fit” for TV? Is it the content, the direction style, the pace, the editing style, the advertisements? Many TV shows are so eager to come up with plots or twists that they entirely miss out on showcasing a meaningful scene or brilliant idea that could have been shown otherwise.

TV shows should be about plots that advance in a better or subtler manner – sort of like how real life works, without the rush or time constraints of network-TV ads. Of course, an abundance of time is not applicable in all situations of life, but we all know how many people watch TV and how easily they are influenced by its content; showing them a fabricated reality at an accelerated pace really gives way to negative life-learning. The content being shown becomes their reality, and eventually reality itself becomes a shocker, leading to depression. Network-TV shows and their plots should, in all honesty, move at a pace closer to that of entertainment’s cinematic cousin: movies. At least in movies, ideas can be expanded or contracted without any intrusion.

There might be a dim light in this darkness of network-TV shows: Netflix. Netflix’s new show “House of Cards” wants to do the opposite of what network-TV shows have been doing for decades. Backed by $100 million of the company’s own cash, the show has so far been receiving positive reviews:

“In the end, House of Cards is a victory for Netflix. It may not be the greatest show on television — how likely was that to be the case with the company’s first try? – but it is a good show, and one that benefits significantly by being freed of the time and scheduling restrictions that television typically imposes.” (Source 2)

The approach is: create a custom show with no advertisements in between. Release an entire season at once, and make it available to the masses. Give creativity an entirely new front. Why make people wait on a weekly basis for the plot to move forward? Or, when it comes to instant access, why create a separate platform like DVDs or Blu-rays to be owned later, after an average of – oh – a MILLION months? Why not have shows available to be viewed by ANYONE, at any time? That still isn’t the case for SEVERAL old and retro network-TV shows. It’s unfortunate, but it’s true.

Netflix has been at the forefront when it comes to breaking the ice of conventional and expensive cable and network-TV operations. Not only has Netflix proven that the old business model is a failure, it has been successful enough to finally start experimenting with ideas and shows that actually make – or *want* to make – better sense of it all.

I wish Netflix the best of luck with this initiative and I hope it follows up with a whole new lineup of content. Netflix can go hand-in-hand with YouTube’s exclusive offerings, and I’m glad not one cent of my personal wallet goes directly to any TV network. Maybe a portion of it goes through Netflix, but even then, that’s the price the shows deserve.

I’ll be watching the entire season very soon myself – you should also give it a try.

Source1: Gizmodo
Source2: TheWired


WiFi networks.. 2013 and beyond.

http://cdn.arstechnica.net/wp-content/uploads/2012/01/flash-wifi-hero-4f0731f-intro.jpg


Many of us refer to WiFi as the technology that connects us wirelessly. Of course, it doesn’t take a genius to make that assumption – and since the answer is so simple, why not think a step further? Do you ever wonder how much farther it can go? How much faster it can be? Where the technology is headed?

This post isn’t about your organs communicating with devices or anything – so no need to get extra excited. What it will cover, however, is how engineers and companies are planning to improve existing in-home WiFi technologies to work faster, and with a lot more efficiency.

Presently, most devices and routers are compatible with the 802.11n WiFi standard, running on the 2.4GHz and 5GHz frequencies – giving an approximate maximum bandwidth of 450Mbps. The next upgrade is 802.11ac, which will run exclusively on the 5GHz frequency and is expected to deliver theoretical speeds of up to 1Gbps (one gigabit per second – extremely fast).

The future of WiFi networks is already being laid out. The next iteration will be called 802.11ad, utilizing the 60GHz spectrum with a theorized output of 6Gbps. The major difference between 802.11ad and its predecessors is that its connectivity is limited to a single room. Meaning, devices are to be connected wirelessly at close range; basic usage would include “wireless docking, file synchronization and backup, and content sharing and streaming among multiple devices and displays with everything running fast enough that even gaming or HD video will work well” (Source#1). The expected arrival and approval of this iteration is around 2014 or later.
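
To keep the numbers straight, here’s the progression above as a tiny script (the figures are the theoretical maximums quoted in this post; real-world throughput is always much lower):

```python
# Rough comparison of the Wi-Fi standards discussed above.
# Speeds are theoretical maximums, not real-world throughput.
standards = [
    # (standard, frequency bands, max theoretical speed, notes)
    ("802.11n",  "2.4 GHz / 5 GHz", "450 Mbps", "current mainstream standard"),
    ("802.11ac", "5 GHz only",      "~1 Gbps",  "next upgrade"),
    ("802.11ad", "60 GHz",          "~6 Gbps",  "in-room range only, ~2014+"),
]

for name, bands, speed, notes in standards:
    print(f"{name:10} {bands:18} {speed:10} {notes}")
```

Notice the tradeoff running through the table: every jump in frequency buys speed at the cost of range and wall penetration, which is exactly why 802.11ad is pitched as an in-room technology.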

What’s interesting is that along with these WiFi-specific (standard) upgrades there are also other initiatives in place that might change the way we connect and communicate in the future.

For example: Passpoint. This tech will allow wireless devices to authenticate and connect to WiFi hotspots instead of mobile towers (base stations), without extra hardware such as SIM cards. This would free up frequencies that could be used for other purposes rather than chit-chat and data exchange. If this gains traction, you can say goodbye to the concept of limited data usage. In my humble opinion, I see it as highly unlikely to happen, because the amount of money wireless carriers make on data overages is enormous. They all work to make shareholders happy – not you – so forget about this one (unless a miracle happens).

Another example: Voice-Enterprise. This initiative aims to provide a higher quality of service (QoS) for voice calls, specifically for the enterprise community (businesses and organizations). The major benefit of Voice-Enterprise is its roaming access points, which would enable anyone to travel across the world without interruption to voice or data services.

A similar example for the TV and entertainment medium: Miracast. A technology being developed to sync devices directly to TVs without interfering with other frequencies. By using a new protocol named TDLS (Tunneled Direct Link Setup), traffic gets split two ways: regular TCP/UDP data keeps flowing over the normal network, while media content streams to the display over a direct Wi-Fi link – freeing up space on both.

Networks are on the verge of becoming not just more sophisticated, but faster and more efficient in the years to come. It’s better to know what’s ahead than what’s not. As wired networks became faster over the years, people believed wireless technologies would never be able to replace them. Well, I beg to differ. Rest assured, if you’re a fan of wireless technology – you ain’t seen nothing yet.

Source#1: PCMag


Mobile device for the visually impaired: Project #Ray



Technology, nowadays, is a blend of learned responses through visual cues and touch-screens. Most of us are lucky enough to see and visualize all these wonderful benefits, but what about those less fortunate in the visual category? What about the visually impaired? How can they make use of cell phones and technology when every option given requires sight?

Introducing: Project Ray. (Model RAY-G300)
– a touch-screen smartphone for the visually impaired.

The company behind the G300 is known as “Ray”, a new startup backed by Qualcomm. They’ve developed a 4″ WVGA-screen GSM device running a tweaked version of Android 2.3 (Gingerbread), equipped with a 1GHz CPU (single core), a 5 MP rear-facing camera, 4GB of memory, a microSD card slot, a microUSB charging port, a headphone port, WiFi 802.11 b/g/n, GPS, and a 1350mAh battery.

There are several reasons it outshines other devices. It’s designed to learn user patterns and apply changes accordingly. On top of self-learning, it ‘talks’ to the user on every action or move, whether that’s reading a magazine, automatic backup and restore of data, easy access to panic and emergency services, reading out the GPS location, etc. (sort of how the Accessibility features work on most smartphones). The biggest benefit is its custom-made user interface, allowing easy hand gestures and movements with maximum functionality and feedback from the device.

The phone currently costs $750 USD on their website and ships for free.

I honestly think most devices nowadays are aimed at people who can see. This is the only initiative solely targeting the visually impaired, which I must say is inspirational. One company that’s actually working for a good cause. I wish this company the best of luck and success – and of course, thanks to Android for being free and customizable, so that it can be used by ‘literally’ everyone.

Source Article: Engadget
Source#1: Ray


Theoretical wireless speeds > Real Wireless speeds. Why, and is there any hope?

http://thumbs.dreamstime.com/thumblarge_398/1242215196Xu9s2w.jpg


So you got yourself a new cell phone, with LTE service. Wireless providers brightly advertise the theoretical speeds that can be achieved on their networks, while the *actual* expected speed hides in an illegibly small-font statement (the fine print).

For example, ROGERS (a Canadian wireless incumbent) states:

The Rogers LTE network is capable of maximum theoretical download speeds of up to 150 Mbps*. Typical download speeds today range from 12 to 25 Mbps for most devices, or even up to 40 Mbps* for selected devices.

*The fine print*: Actual experienced speeds depend on the network spectrum and technical specifications of the device used and may vary based on topography and environmental conditions, network congestion and other factors.

So 150Mbps vs 40Mbps (and that’s me being generous).

A difference of 110Mbps, that’s HUGE!

The problem most wireless networks face (whether WiFi or LTE) is “lost” data packets. Wireless networks transfer information in data packets, which are digitally layered envelopes of information. Wireless data packets can penetrate objects or walls quite easily, but sometimes the distance or ‘path’ a packet takes to reach its destination can be windy, stormy, or difficult altogether (filled with obstructions). Since it’s travelling through thin air, lots of information can get lost and arrive incomplete. The transmitting device then gets a request to re-transmit the lost data. This re-transmission ‘creates’ congestion, because a device that is already overwhelmed with requests and constantly sending data has to re-prioritize and re-send an already-sent packet – they may be machines, but they aren’t perfect. Congestion is the main reason theoretical speeds can never match real-world speeds.
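
To get a feel for why lost packets hurt, here’s a toy back-of-the-envelope model (my own simplification, not from the source): if a fraction p of packets is lost, each packet needs on average 1/(1-p) transmissions to get through, so useful throughput scales down by (1-p) – and that’s before real congestion control makes things much worse:

```python
def effective_throughput(link_speed_mbps: float, loss_rate: float) -> float:
    """Toy model: with loss rate p, each packet needs on average
    1 / (1 - p) transmissions, so goodput scales by (1 - p).
    Real protocols (e.g. TCP) back off far more aggressively,
    so actual throughput collapse is much worse than this."""
    return link_speed_mbps * (1 - loss_rate)

# Best-case goodput on a 150 Mbps LTE link at various loss rates.
for p in (0.0, 0.05, 0.10):
    print(f"loss {p:4.0%}: ~{effective_throughput(150, p):.0f} Mbps best case")
```

Even this optimistic model ignores the re-prioritization overhead described above, which is why real-world speeds land so far below the advertised 150 Mbps.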

The solution? Many companies acquire bandwidth frequencies and invest heavily in hardware upgrades to manage these issues. While such investments provide long-term business benefit and coverage, they haven’t curbed the negative implications of the lost-data-packet and congestion phenomena. The problem lies in the underlying algorithm used to exchange information (on transmitting and receiving devices – routers, base stations, cell phones, etc.).

MIT, however, has replied with: Algebra.

They designed an algorithm that makes the ‘receiving device’ automatically figure out the ‘lost contents’ instead of requesting a re-transmission!

Typical packet loss is around 5-10% (which significantly affects data quality). According to their tests:
On 2% packet loss (with the new algorithm): the transmission received a boost of 16Mbps!
On 5% packet loss (with the new algorithm): the transmission received a boost of 13.5Mbps [loss rates usually experienced in cars or moving trains].

Basically, they removed the concept of a data packet and replaced it with data equations. A device sends an equation, and even if parts of it are lost along the way, the receiving device can figure out the missing pieces with the designed algorithm. Brilliant stuff.
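
MIT’s actual algorithm (random linear network coding) works over finite fields and is far more sophisticated than anything shown here, but the simplest possible illustration of the “equation” idea is XOR parity: send two packets plus their XOR, and the receiver can reconstruct any single lost packet without asking for a re-send. A hedged toy sketch:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings together."""
    return bytes(x ^ y for x, y in zip(a, b))

# Sender transmits two data packets plus one "equation" (their XOR).
p1 = b"hello world, pt1"
p2 = b"hello world, pt2"
parity = xor_bytes(p1, p2)            # the extra coded packet

# Suppose p1 is lost in transit; the receiver still has p2 and parity.
# Since (p1 ^ p2) ^ p2 == p1, the receiver solves for the lost packet
# locally instead of requesting a re-transmission.
recovered_p1 = xor_bytes(parity, p2)
assert recovered_p1 == p1
```

The payoff is exactly the congestion argument above: no retransmission request ever travels back to an already-overwhelmed sender.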

It sounds too good to be true, but many companies have already licensed this algorithm from MIT, which probably means it works. Due to non-disclosure agreements, no one really knows who has it yet.

By replacing data packets with data equations, this will eventually eliminate the concepts of re-transmission and packet re-prioritization – removing congestion and bringing real-world speeds closer to theoretical ones, transforming theoretical into actual!

I’m excited!

Source Article: Engadget
Source#1: Heavy-Reading
Source#2: Android-Authority
Source#3: Technology Review


The brain remains highly active, even while you sleep.



All of us sleep, and according to conventional wisdom, when people sleep their entire body DIES (including the brain)! “Dying” is an exaggeration, and the claim simply isn’t true; still, I’m sure not many of us are aware of what actually happens in our brains when it comes to memory transfer, trace, and storage during sleep. Or, in fancier terms: memory consolidation.

Let’s lay out some ground rules first — definitions:

Sleep is a naturally recurring state characterized by reduced or absent consciousness, relatively suspended sensory activity, and inactivity of nearly all voluntary muscles.
(Source: http://en.wikipedia.org/wiki/Sleep)

Memory consolidation is a category of processes that stabilize a memory trace after the initial acquisition.[1] Consolidation is distinguished into two specific processes, synaptic consolidation, which occurs within the first few hours after learning, and systems consolidation, where hippocampus-dependent memories become independent of the hippocampus over a period of weeks to years. (Source: http://en.wikipedia.org/wiki/Memory_consolidation)

According to the source article, researchers at UCLA concluded that during sleep the brain is constantly learning, adapting, storing, and analyzing (in short, performing memory consolidation) – even during anesthesia-influenced trials. This discovery breaks the previously accepted norms about memory consolidation. Sleep was seen as “relatively suspended sensory activity”, and that’s no longer the case.

Neocortex (outer brain, aka new brain): involved in sensory perception, generation of motor commands, spatial reasoning, conscious thought, and language.
Hippocampus (inner brain, aka old brain): consolidation of information from short-term to long-term memory, and spatial navigation.
Entorhinal cortex (intermediary, aka middle brain): functions as a hub in a widespread network for memory and navigation. It is the main interface between the hippocampus and neocortex.


When we’re in our conscious state (not sleeping), the ‘middle brain’ is constantly used as a quick storage medium (call it a clipboard, storing temporary data for instant retrieval – like the copy-paste function on computers), and is always active. It was assumed that this heavily used medium would rest, or play no role at all, during unconsciousness – but that’s no longer believed to be true.

It had been shown previously that the neocortex (new brain) and the hippocampus (old brain) “talk” to each other during sleep, and it is believed that this conversation plays a critical role in establishing memories, or memory consolidation. However, no one was able to interpret the conversation.


The findings challenge theories of brain communication during sleep, in which the hippocampus (old brain) is expected to talk to, or drive, the neocortex (new brain).

Here’s what actually happens while you’re sleeping. The new brain (which should also be ‘sleeping’, and retains day-to-day memories) sends consistent signals to the middle brain (at a much lower pace), while the middle brain constantly works to process and eventually transfer information into the old brain (hippocampus) for storage. Thanks to the research team’s sensitive monitoring system, they were able to translate what the brain regions were actually transferring between each other! The system was sophisticated enough to monitor all three parts of the brain simultaneously – it even captured data from neurons that seemed to be in an inactive state.

The whole point of this discovery was to point out two things:

1) The brain is a complex little organ; its sophistication and complexity still amaze me. No matter how much information we discover, there are always hidden pathways we miss. Completely brilliant.

Mehta theorizes that this process occurs during sleep as a way to unclutter memories and delete information that was processed during the day but is irrelevant. This results in the important memories becoming more salient and readily accessible. Notably, Alzheimer’s disease starts in the entorhinal cortex and patients have impaired sleep, so Mehta’s findings may have implications in that arena.

2) The study can lay a better foundation for discovering brain- and performance-related deficiencies. Since sleeping and non-sleeping brain activity is (now known to be) highly similar, the path to possible solutions can be simplified. This will help scientists discover whether behaviours – in how we sleep or live – can have an adverse impact on memory consolidation.

Source Article: Gizmodo


2012 Canadian Copyright Law (Bill C-11): What does it mean for you?

http://memecrunch.com/meme/2U6X/dowloads-copyright-music-to-ipod/image.png

Don’t be scared by the picture above; let’s talk about what some of us might and might not know about: the Canadian Copyright Law (Modernization Act: Bill C-11).

Most developed countries use several forms of legislation to protect people and their creative works. One such ruling comes in the form of copyright law. A ‘federal statute’ (a written governmental authority) known as the “Copyright Act of Canada” governs copyright within Canada [WikiPedia]. There have been several ‘revisions’ of the copyright law, which came in the form of bills. Bill C-11 is the latest revision, which updates or ‘modernizes’ the current set of laws to protect consumers and copyright holders simultaneously.

All provisions of the copyright law can be read here [WikiPedia].

What does it mean for DOWNLOADING and YOU (after the bill has passed)?

What’s the main point of all this?
– to stop people from downloading through torrents and ‘seeding’ files. Commercial liabilities are being added for people who sell illegal copies of these downloads in stores, or privately.

Can I download anything I want?
– Yes, but it all comes down to how you are downloading and what you are doing with the file. There are several ways to download or share files these days: torrents, file-sharing websites, newsfeeds, warez, VPNs (proxies), etc. Torrents can be easily tracked and monitored unless you protect yourself through multiple VPN (proxy) variants [and in many instances, even those are traceable]. If you download a copyrighted file, you are required to delete it within a specific amount of time, and not share it for commercial purposes (i.e. sell or re-distribute it without the owner’s consent). If it’s a file that had a ‘digital lock’ on it, then its copyright holder is protected. Breaking this ‘digital lock’ is now considered ILLEGAL – the one who started the distribution and broke the lock will be held liable, along with those who ‘seeded’ even bits and pieces of the file (as happens via torrents).

So, who is specifically liable (responsible) and who isn’t?
– Person(s) illegally distributing the copyrighted material, and commercializing those assets, will be held liable – whether individuals or companies. Your Internet Service Provider (ISP), search engine (Google, Bing, etc.), or cloud-storage website (RapidShare, Google Drive, SkyDrive, Dropbox) will not be held liable for enabling infringement of content; however, they are obliged to comply with court orders to give out information on IP addresses/accounts – which helps the process of litigation against the people involved in illegal distribution.

What will happen if I’m accused of torrenting and downloading/sharing a copyrighted file?
– This is a step-by-step process:

  • A copyright holder would send a notification to the ISP or search engine, in a prescribed format, identifying the electronic location to which a claimed infringement relates.
  • The ISP would forward this notification to the subscriber (the person at that electronic location, a.k.a. you).
  • The ISP or search engine must store the subscriber’s IP information for 6 months, or a year if a court action results from the infringement.
  • The ISP may be liable for statutory damages of $5,000 to $10,000 if it fails to comply with these provisions.
    [Source: UBC]
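For the curious, the retention rule in the steps above can be sketched as a tiny bit of Python. This is purely illustrative (not legal advice): the function name is invented, and the 182-day stand-in for “6 months” is my own approximation.

```python
from datetime import date, timedelta

# Illustrative sketch of the record-retention rule described above.
# Not legal advice; 182 days is an assumed stand-in for "6 months".
RETENTION_DEFAULT = timedelta(days=182)  # roughly 6 months
RETENTION_COURT = timedelta(days=365)    # 1 year if a court action results

def retention_deadline(notice_received: date, court_action: bool) -> date:
    """Date until which the ISP must keep the subscriber's IP records."""
    keep_for = RETENTION_COURT if court_action else RETENTION_DEFAULT
    return notice_received + keep_for
```

So for a notice received on January 1, 2013, the records would be kept until roughly mid-2013 by default, or until January 1, 2014 if a court action follows.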

We still don’t know the length of these proceedings, who else could be involved, what the consumer rights are, or how consumers can get help. What we do know is the ‘criteria’ used to determine that a person is at fault: each situation will be evaluated on a case-by-case basis, against the ‘6 step criteria’:

  • the purpose of the dealing;
  • the character of the dealing;
  • the amount of the dealing;
  • alternatives to the dealing;
  • the nature of the work; and
  • the effect of the dealing on the work.

What are the penalties or monetary damages?
– “Statutory damages for copyright infringements with non-commercial purposes have been reduced from the current $500 to $20,000 per work infringed, to $100 to $5,000 for all infringements in a single proceeding for all works (not for each work infringed). The current range continues to apply to cases of infringement for commercial purposes only.”
[Source: UBC]
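To see what that change means in dollar terms, here is a quick hypothetical calculation in Python, using only the ranges quoted above; the function names are my own invention.

```python
# Hypothetical comparison of non-commercial statutory-damage exposure,
# using the ranges quoted above (old: per work; new: per proceeding).

def old_exposure(works_infringed: int) -> tuple[int, int]:
    # Old rules: $500 to $20,000 PER WORK infringed
    return (500 * works_infringed, 20_000 * works_infringed)

def new_exposure(works_infringed: int) -> tuple[int, int]:
    # New rules: $100 to $5,000 total for ALL works in one proceeding
    return (100, 5_000)

# Sharing 10 songs non-commercially: old exposure (5000, 200000) dollars;
# new exposure (100, 5000) dollars, no matter how many works are involved.
```

In other words, someone who shared ten songs went from facing up to $200,000 in statutory damages to at most $5,000, as long as the infringement was non-commercial.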

Is there any good news for consumers?
– Not a lot, but here are some rights granted to consumers under this act:

  • You could splice (cut/copy) scenes from copyrighted videos and movie trailers to create a fan-made trailer, research piece, critique, review, or video, and distribute it on the Internet.
    … that is also subject to certain conditions:
    e.g.: the source and author HAVE to be identified, the original work or the copy used must be legal, and there must be no substantial adverse effect on the exploitation of the original work.
  • You could copy a song purchased from iTunes from your computer to your iPod.
    … that is also subject to certain conditions:
    e.g.: you cannot reproduce a copy for private use on CD-Rs or MiniDiscs. The copy has to go from one “audio medium” to another, meaning from a device that plays music (computer) to another device that plays music (iPod). CDs or MiniDiscs are only ‘received’ as inputs by media players. (wow, unbelievable lol)
  • You can record a show on your PVR to watch at a later time.
    … that is also subject to exceptions:
    e.g.: you cannot record an on-demand service, or works protected by digital locks.
  • You can take a photograph and be deemed its author. Prior to this ruling, the company (or person) who “commissioned” the photograph was deemed the author.
    [Source: UBC & Mondaq]

Any other stakeholders?
– The focus has been primarily on educational institutions, copyright holders, and specific rights provided to consumers; all the details can be read through the sources below. But the information laid out above is pretty much what you need to know.

A bit of my rant now. In one way, some of these rulings are completely absurd and harm the way things are shared online. But in realistic terms, I don’t think companies will go after perpetrators involved in petty sharing amongst friends. It’s only when things get out of hand (creating illegal sharing networks, selling pirated copies, and monetizing those) that these people will be subject to court visits. But it should make you wonder WHY people start pirating in the first place. Maybe ‘consumable media’ is a bit too expensive and needs to be rethought? No one focuses on that; all they want to focus on is their deep pockets and the rights they can enforce on normal consumers like us.

Source: CDMN
Source: Wikipedia
Source: PARL (actual bill)
Source: HuffingtonPost
Source: UBC
Source: Mondaq


Artificial Intelligence attempt: Upload a Bee’s brain into a Robot.


Engineers and researchers in the UK are about to turn another crazy sci-fi idea into reality: bionic bees.

Scientists have always fancied the idea of Artificial Intelligence, but many have either failed to create it or found it too difficult to program, since sophisticated animals were used as the test subjects to replicate.

These folks used a different approach: use simpler ‘beings’ like insects, which carry a far smaller set of instructions within them.

The idea is to upload a bee’s brain into a robot. With a limited set of instructions, they are aiming for their robots to work on ‘instinct’ rather than pre-programmed algorithms. The core of the project consists of uploading algorithms modeled on the senses of sight and smell, which the robot would need in order to act like a normal insect and be accepted by the environment.

Afterwards, researchers want to use these mini robots for time-critical missions (like searches, rescues, etc.), and perhaps one day even to replace conventional bees (which might be close to extinction? not sure about that one).

The project is called “Green Brain” – funded by IBM and the U.K.’s Engineering and Physical Sciences Research Council for approximately $1.6 MILLION USD. We can expect to see buzzing robo-bees by 2015.

Anyway, the point is to gain an upper hand in A.I. research by creating models based on these soon-to-be ‘natural robots’. If you think about it deeply (and of course believe in such things), maybe all life on this planet was also developed this way by the ‘Creator of all things’ – slowly but surely, in small steps.

Source: CNET