Category: Uncategorized

EATEL Rewards Six Exceptional High School Seniors With Laptop Computers

Six outstanding seniors from Ascension and Livingston Parish high schools were honored on April 25, 2017, at EATEL’s annual Technology Awards Ceremony, now in its 31st year. Highlighting the significance of both technology and academic accomplishment, EATEL presented one standout graduating senior from each high school in its service area — Donaldsonville, Dutchtown, East Ascension, French Settlement, Maurepas, and St. Amant — with a touch-screen, ultra-portable laptop computer. The ceremony was held in

Read More

Using the Computer’s Fan Speed to Steal Data From Air-Gapped Computers / robokoboto

Hacking has been going on for as long as computers have been around, and as technology becomes more and more complex, so do hackers.  A new piece of malware has been revealed that compromises privacy by stealing data through the one thing people thought was fail-safe — the air gap.

An air gap is a security measure in which a computer is physically disconnected from other computers and networks such as the internet.  Its name describes the way it works: it literally leaves a gap of air to ensure the machine is physically isolated.  Once regarded as a very effective way of protecting against hackers, it has now been challenged by a piece of malware called ‘Fansmitter’, which takes control of the computer’s fan to modify its rotation speed and thereby control the sounds it generates.

The malware works by transmitting information from the computer in the form of a preamble and a payload.  The preamble lets a nearby receiver lock onto the signal, while the payload encodes the information to be transmitted, which can be picked up by a smartphone or other nearby listening device.  To conceal the change in fan noise, the researchers used low frequencies that are difficult for people to detect.
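To make the preamble-plus-payload framing concrete, here is a small illustrative sketch. The constants, names, and bit scheme below are my own assumptions for the sake of the example, not details from the published research: each bit is represented by holding the fan at one of two low, similar-sounding speeds.

```javascript
// Illustrative only: a Fansmitter-style frame is a fixed preamble (so the
// listener can lock on) followed by the payload bits. All names and values
// here are assumptions for the sketch, not taken from the paper.
const PREAMBLE = [1, 0, 1, 0];                 // assumed sync pattern
const RPM_FOR_BIT = { 0: 1000, 1: 1600 };      // two acoustically distinct fan speeds

// turn one byte into its 8 bits, most significant bit first
function toBits(byte) {
  const bits = [];
  for (let i = 7; i >= 0; i--) bits.push((byte >> i) & 1);
  return bits;
}

// prepend the preamble, then map each bit to the fan speed that encodes it;
// a real transmitter would hold each speed for a fixed time slot
function schedule(payloadBits) {
  return PREAMBLE.concat(payloadBits).map(bit => RPM_FOR_BIT[bit]);
}

console.log(schedule(toBits(0x41)));
// => [1600, 1000, 1600, 1000, 1000, 1600, 1000, 1000, 1000, 1000, 1000, 1600]
```

A receiver would do the inverse: detect which of the two acoustic frequencies the fan is producing in each time slot and reassemble the bits after spotting the preamble.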

In testing, the research team at Ben-Gurion University in Israel, led by Mordechai Guri, managed to transmit information from an air-gapped PC to a smartphone in the same room, demonstrating that you don’t need a microphone connected to the computer in order to hack it.

Though this may sound a little worrying, the team is carrying out further studies to try to combat this scenario.  Some possible countermeasures include having computers generate so much background noise that it would not be possible to record anything, or replacing current fans with silent ones.  However, whatever techniques they try, there will always be somebody out there seeking to obtain information unlawfully who will keep finding new ways to do it.  Regrettably, if something can be made, it can also be broken.


Reactive programming and mvc


Reactive programming demos are impressive. Two of my favorites are:

It is surprising that such succinct code can accomplish such complex tasks with so few abstractions. However, when elegance combines with unfamiliarity, it’s easy to overlook the lessons we have learned through hard years on the front lines of app development:

  • Components must be composable, decoupled, and testable
  • Isolate your application state
  • Don’t repeat yourself

Reactive programming should complement the design patterns we know from mvc. While those small demos may seem to imply that reactive programming supplants mvc, the two are quite different. Reactive programming proposes a small set of primitives for managing state with data flow, whereas mvc separates application concerns.
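To make “a small set of primitives for managing state with data flow” concrete, here is a dependency-free toy stream with a scan primitive. It is a simplified stand-in I wrote for illustration, not RxJS’s actual API:

```javascript
// A toy event stream with three primitives: subscribe, emit, and scan.
// scan folds state over events and emits each intermediate state -- the
// same shape of primitive reactive code builds state accumulation on.
function makeStream() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    emit(value) { listeners.forEach(fn => fn(value)); },
    scan(seed, step) {
      const out = makeStream();
      let state = seed;
      this.subscribe(value => {
        state = step(state, value);
        out.emit(state);
      });
      return out;
    }
  };
}

// count clicks purely through data flow; no mutable counter in our own code
const clicks = makeStream();
const counts = clicks.scan(0, function (n) { return n + 1; });

const seen = [];
counts.subscribe(n => seen.push(n));
clicks.emit('click');
clicks.emit('click');
console.log(seen); // => [1, 2]
```

The point is that state lives inside the data-flow pipeline itself; nothing here says where that state should live in an application, which is exactly the question mvc answers.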

In a good mvc app the state lives in the model, so let’s examine how we can use reactive programming for the “M” and keep it separate from the “V”.


We’ll start with an example of a simon-says game, implemented without any separation of concerns.

The game shows the user a sequence of numbers and a set of numbered buttons. The player presses the buttons in the order of the numbers. They win if they get them all correct; otherwise it is a loss. They can restart the game at any time. Here is an example:

I know, I am horrible at making games.

The full code is available elsewhere, but I’ve pulled out the relevant portion below. It reads top to bottom, so I’ve just explained the code inline.

// this is a stream of click events
var newGameClicks = Rx.Observable.fromEvent($newGameButton, 'click');

// this ends up being a stream of game results, which emits each time a game
// completes, but the road to get there is somewhat complex, so it is explained
// below
var gameResults = newGameClicks

  // create an array of random numbers on each click
  .map(function () { return makeRandomOrder(); })

  // use the array of numbers as the data to render the game and pass it through
  .map(function (order) { renderGame(order); return order; })

  .flatMap(function (order) {

    // for each array of numbers, return an event stream of button clicks
    return Rx.Observable.fromEvent($game.find('.number-buttons'), 'click')

      // on each click, given the previous state, we pass along the new
      // 'gameState', which comprises 2 pieces of information:
      // - the original order
      // - the buttons pressed so far
      // so after this call we get a stream of game-states that emits each
      // time the user clicks a number button
      .scan({ order: order, pressed: [] }, function (state, event) {
        var val = +($(event.target).text());
        return {
          order: state.order,
          pressed: state.pressed.concat([val])
        };
      })

      // we have 2 end conditions:
      // - an incorrect button press
      // - the player got them all correct
      // so we make the stream of game-states end on either of the above
      // conditions. Note that this uses a helper function defined in lib/common.js
      // that acts the same as RxJS's 'takeWhile', but it includes the last
      // item.
      .takeWhileInclusive(function (state) {
        var prefix = state.order.slice(0, state.pressed.length);
        return _.isEqual(prefix, state.pressed);
      })

      // we only want to concern ourselves with the state of things when the
      // game finishes, so this yields a stream of only 1 game-state, which
      // emits on the last click
      .last()

      // end the stream if the user requests a new game. This still
      // yields a stream of only 1 game-state
      .takeUntil(newGameClicks);
  })

  // now we have a stream of streams of single game-states, comparable to a
  // nested array of objects '[[{}], [{}], ...]'. We really only want a
  // flattened stream of the final game-states, so the call below flattens the
  // *inner* streams
  .concatAll();

// in RxJS, these event streams aren't active until you call something like
// subscribe or forEach

True to form, this implements the game succinctly using common reactive programming idioms, all built on one abstraction: the event stream (or Observable, in RxJS parlance). It doesn’t have the familiar class or view definitions we’re used to seeing in popular mvc frameworks, and it’s got all those side-effect-free functions… surely this code would earn a nod of approval from functional purists, right? This code doesn’t feel right though. It is mixing our application’s data model (the order of the numbers, the results of the games) with displaying the data.

It is the jQuery spaghetti of functional JavaScript.

What is wrong

We want to be able to extend this app without altering the preceding code, but it’s easy to think of features that would trip us up:

  • Adding keyboard shortcuts for pressing the number buttons
  • Reporting scores to a backend on wins/losses
  • Introducing different ways to start a new game, for instance by navigating via pushState

More generally, there are questions we can ask as a litmus test of whether our UI components are maintainable:

  • Can we create many views that act on the same data?
  • Are those views conducive to unit testing?
  • Can we automate control of the app from the dev tools console?

The code above does not meet any of these criteria.

The underlying data model

The underlying data model of this program could look something like this:

  • A stream of “new games”, each of which is represented by an array of numbers that the player needs to match
  • A stream of results, which include both the original order and the buttons that the player pressed

The new-game stream can’t just be hard-coded to come from one button’s clicks though. We need a level of indirection whereby any event stream can be connected to or disconnected from the new-game stream dynamically.

A decoupled strategy

To achieve this we’ll need to introduce a new abstraction, something RxJS calls a Subject, which both consumes and produces events.

var subject = new Rx.Subject();
subject.subscribe(console.log);
subject.onNext('foo');
// => 'foo'

We need it to produce arrays of random numbers, however, regardless of what is passed into the onNext call.

var subject = new Rx.Subject();
subject
  .map(makeRandomOrder)
  .subscribe(console.log);
subject.onNext('foo');
// => [1, 3, 1, 0]

The problem now, however, is that map returns a new observable, and we need one object that both consumes the onNext calls and produces the arrays.

var subject = new Rx.Subject();
var observable = subject.map(makeRandomOrder);

var newGameStream = Rx.Subject.create(subject, observable);
newGameStream.subscribe(console.log);
newGameStream.onNext('foo');
// => [2, 2, 0, 3]

There’s one more subtlety. The observable variable will produce a new value for every subscription, not for every onNext call. That means different observers would receive different arrays of values, defeating the purpose of using a Subject in the first place.

newGameStream.subscribe(console.log);
newGameStream.subscribe(console.log.bind(console, 'next subscription:'));
newGameStream.onNext('foo');
// => [0, 1, 3, 2]
// => next subscription: [3, 1, 0, 1]

In cases where the behavior is deterministic, this would actually be safer, since it insulates us from inadvertently sharing state. In this case, however, we want every subscription to receive the same value, so we’ll share the Observable.

var subject = new Rx.Subject();
var observable = subject.map(makeRandomOrder).share();

var newGameStream = Rx.Subject.create(subject, observable);
newGameStream.subscribe(console.log);
newGameStream.subscribe(console.log.bind(console, 'next subscription:'));
newGameStream.onNext('foo');
// => [2, 1, 0, 3]
// => next subscription: [2, 1, 0, 3]

A similar approach can be applied to the stream of results, allowing us to decouple the reactive data model from the views.
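As a sketch of that indirection, here is a minimal, dependency-free Subject (an object that is both an observer and an observable). RxJS’s real Subject does much more, and the consumer names below are hypothetical:

```javascript
// Minimal Subject: consumers subscribe, and any producer (the game view, a
// unit test, or the dev tools console) can push results in via onNext.
function makeSubject() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    onNext(value) { listeners.forEach(fn => fn(value)); }
  };
}

const gameResults = makeSubject();

// two independent consumers of the same data: a score reporter and a
// scoreboard view (both stubbed out here as arrays)
const reported = [];
const rendered = [];
gameResults.subscribe(result => reported.push(result));
gameResults.subscribe(result => rendered.push(result));

gameResults.onNext({ order: [1, 2], pressed: [1, 2] });
console.log(reported.length, rendered.length); // => 1 1
```

Because producers and consumers only know about the Subject, adding a new view or a new result source never requires touching the game logic.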

Tight views, loosely coupled

This design enables simple views responsible for nothing beyond converting the data to DOM. The views are modular and testable, and the model doesn’t have to be altered for each new feature. We’re reaping the benefits of reactive programming while maintaining mvc’s separation of concerns.

Note: there are various flavors of mvc, and this post is chiefly concerned with the view and the model, so I am using “mvc” as a generic term.

Note: the idea for the game comes directly from @lawnsea’s talk, which was largely the inspiration for this post.

© aaron stacy 2014, all rights reserved

Using Heat to Power Computers

Ever since the invention of computers, engineers and researchers have been trying to work out how to deal with the heat they give off in order to avoid malfunctions and sudden shutdowns.

Why try to reduce the output of heat if it can be used as a source of energy?

A research duo at the University of Nebraska-Lincoln has developed a “thermal diode” that can withstand temperatures of up to 330 degrees Celsius. Image courtesy of University of Nebraska-Lincoln.

Rather than working on advanced cooling technology or attempting to reduce heat output in the first place, two University of Nebraska-Lincoln engineers have flipped the problem upside down and created a new technique to put all of that unwanted thermal energy to good use.

“If you think about it, whatever you can do with electricity you should (also) be able to do with heat, as they are similar in a lot of ways,” said one of the researchers, Sidy Ndao. “In principle, they are both energy carriers. If you could control heat, you could use it to do computing and avoid the problem of overheating.”

In a paper published recently in the journal Scientific Reports, Ndao and colleague Mahmoud Elzouka describe their nano-thermo-mechanical device, known as a thermal diode, which is capable of operating at temperatures near 330 degrees Celsius (or 630 degrees Fahrenheit).

The ultimate aim is to make the device (which Ndao envisions as a “thermal computer”) resistant to more than twice that heat (i.e., 700 degrees Celsius, or about 1,300 degrees Fahrenheit), thereby opening the door to many different applications.

According to the authors, their invention “may be used in space exploration, in exploring the core of the earth, for oil drilling, (for) numerous applications. It may allow us to do research and process data in real time in places where we have not been able to before.”

It could also be employed in efforts to save energy: as much as 60% of the energy generated in the United States is lost to the atmosphere as heat, which could, at least potentially, be used to power the new device.

The logical next step is to increase efficiency and demonstrate experimentally the device’s ability to carry out computations.

Feasible or not, Ndao dreams even bigger: “We want to create the world’s first thermal computer,” he said. “Hopefully one day it will be used to unlock the mysteries of outer space, explore and harvest our own planet’s deep-beneath-the-surface geology, and harness waste heat for more efficient energy utilization.”



    In-Circuit ESP8266 Programming Tool

    Reprogramming A 13 Consumer Wifi Outlet

    A nicely detailed instructable shows exactly how to wire a freestanding ESP-12 module for programming. I wasn’t going to perform surgery on the board, but I’d heard that in-circuit programming is often possible, so I was hopeful.

    The next challenge was wiring the chip for programming. One obvious choice was to solder wires to the pads and use those. I didn’t like that approach. For one thing, I am not very good at soldering. For another, the pads are both very close to one another and (on one side of the chip) close to the plastic housing. I wouldn’t have been able to solder the wires on that side without melting and burning the case. Finally, if I managed to reprogram this device, I’d want to reprogram many of them, so I needed a non-destructive way to reprogram them quickly and reliably.

    I’d heard of something called a “bed of nails” programming device, which is a special testing or programming fixture consisting of a flat board with spring-loaded pins sticking out of it. When a circuit board is clamped down onto the bed of nails, the spring-loaded pins make contact with test pads on the circuit board, allowing the user to program or test the circuit[0]. I figured I would be able to 3D print something that would let me use the bits of breadboard wire and hookup wire I already had on hand as a basic bed of nails.

    This was my introduction to how demanding the tolerances for electronics really are. I iterated through 12 different designs for the bed-of-nails block. I thought I could use breadboard wire that already had pins on the end, but between the varying lengths of the pins and the need to angle the wires in a fan to account for the thickness of the insulation, this failed fast. Next I tried single-strand hookup wire. Again, almost any design that relied on the length of the uninsulated tip of the wire proved too unreliable. Eventually I tried a design with two tiny holes for each wire, so that a loop at the base of the block could contact the pad. Even this was too uneven for all of the wires to make contact.

    A few of the many iterations

    Another difficulty I ran into was the accuracy limits of my 3D printer. The holes for the wires needed to be around 1mm. The nozzle on my 3D printer is 0.4mm. In practice, this means that the “slicer” application that produces the path for the 3D printer has a far larger effect on the eventual object than what is defined in the 3D model I try to print. It was a process of trial and error right at the edge of the printer’s abilities to determine the precise settings that would do the job. In the end I created a profile of printer settings for this project, featuring slower speeds and a finer layer height. Using these settings increased the time to print a small block by a factor of approximately 12, from 5 minutes to over an hour, but gave me the extra precision I needed.

    Printing time: 1 hour

    My last attempt at programming before I gave up on hookup wire was with the block that held miniature loops of wire. This at least came close enough to working that I was able to test the programming procedure. Unlike the development boards I was used to, which are programmed with just a computer and a USB cable, the ESP-12 module lacks the special chip that converts between USB and its serial protocol. This means you need an intermediate programmer chip, and you have to wire it in a circuit with the chip you want to program. This programmer chip also allows you to listen to any messages the chip might send out. There were several times when I was able to pick up signals on the serial line when I reset the chip. It made me believe I was on the right track, even though it wasn’t actual text. Additionally, I learned that the power supplied by the programmer was insufficient to reliably power the chip, so I needed to use my bench power supply to feed 3.3V to the ESP-12 and the programmer.

    Programmer chip

    ESP-12 connections for programming. Labels are what the pins should connect to, not what they are.

    When I had tried and failed with every configuration I could think of, I went back to exploring designs. It was then that I discovered “pogo pins”, which as the name implies are small, spring-loaded pins used in professional bed-of-nails testers. They’re fairly cheap: I got 100 in the size I needed, including shipping, without looking too hard.

    100 count pogo pins

    It was two weeks after I had started this project that the pogo pins arrived, and I was gradually ramping up the seriousness with which I was approaching the design. At the beginning I had focused on finding quick wins, and had been rewarded by learning a great deal about the circuit design and requirements in a short time, but I’d had no luck with the programming. Now I was ready to drop into a lower gear and plan a bit more to ensure a result.

    I settled on a design based around a little assembly in which I soldered one pogo pin to a length of hookup wire, then covered the joint in insulation. On the other end of the wire I attached a Dupont connector. Making 16 of these small assemblies took approximately 2.5 hours, but in the end, when I tested them for continuity from the spring-loaded tip to the end of the wire attached to the Dupont connector, I was confident that I had the building blocks of a reliable system.

    Pogo pin, wire, and Dupont connector

    A few experiments pointed toward the best design yet. It would use a block with holes guiding the pogo pins down onto the pads around the ESP-8266 chip. Surrounding that is a pair of plates so that the whole device, relay and all, could be held in place and the pins clamped down to ensure a good connection.

    Clamp ensures a good connection; the middle screws press down the guide and prevent breaking the pins by overtightening.

    I was ready for a test. Previously, I had used the reset pad as a test signal: I knew that shorting that pad to ground should make the board reset, which looked like a flash of the blue LED followed by a pulse. But now, when I grounded the pin, a red LED came on, and none of the usual ones did. Through trial and error, I realized that if I grounded it during startup without making the connection to GPIO 16, the normal reset would happen. However, if GPIO 16 was attached to ground as instructed, just the red light came on. I understood that this was good news as soon as I looked back at the instructions. The reason GPIO 16 is supposed to be held low is that booting in that configuration puts the chip into programming mode. What I was seeing was that the chip was ready to be programmed.

    I hooked up the bed of nails to the chip. I hooked up the wires. I hooked up the programmer to the computer. I had a little test program that “blinked” GPIO 12 — the relay-control pad — on a two-second cycle. I clicked to program the board, and I almost couldn’t believe it when the upload completed and succeeded. When I looked over at the board, the red LED that reflected the relay state was flashing on and off. I plugged a light into it, plugged it in, and put the plastic cover back on. Click. The light went on. Click. The light went off. Click. Click. Click. I just watched it. Then I bought a couple more relays :).

    Success after 3 weeks

    [0] This instructable for a bed of nails for a 3D-printed circuit board illustrates the principle nicely.

    A Bright Future in the Computer Industry with Quantum Computers

    A computer is much more than a gadget for chatting online or sending emails. It contains programs for carrying out tasks. A conventional computer may not be effective in the future due to the quantity of data existing in data banks. The information processing sector, however, stands to benefit from the introduction of quantum computing.

    How Quantum Computing Works

    A quantum computer uses qubits rather than the bits of traditional computers. A qubit can store a one, a zero, or both at once. This lets quantum computers store and process data simultaneously, and work in a way that allows many tasks to be performed at once, making information processing faster than on conventional computers.
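    The “one, zero, or both” idea above has a standard mathematical form (this notation is textbook quantum mechanics, not from the original article). A single qubit’s state is a superposition of the two basis states:

    ```latex
    % a qubit is a weighted superposition of the basis states |0> and |1>
    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
    \qquad |\alpha|^2 + |\beta|^2 = 1
    ```

    Measuring the qubit yields 0 with probability |α|² and 1 with probability |β|², which is why a register of n qubits can hold a superposition over 2^n values at once.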

    The qubits are stored in atoms. These atoms require a special mechanism to contain them in a state where they can store and conduct data. Typically, magnetic fields, laser beams, and radio waves are used to contain them.

    This is a special and interesting departure from how traditional computers work. Traditional computers operate on two binary digits: 0’s and 1’s. All the tasks undertaken on a computer, including computing or online chats, are translated into 0’s and 1’s and then algorithmically processed.

    The Advantages of Quantum Computing

    Data banks are gradually reaching their limits in processing information. Users generate data rapidly by taking pictures and videos. Conventional computers are on the brink of reaching the limits of information processing. As a result, major companies in the computer sector are planning to mitigate this dilemma. They anticipate that quantum computers could handle all of the larger computer processing tasks. Apart from what we have already discussed, quantum computers have the following benefits:

    Optimization of Solutions

    Quantum computers will sample information and optimize the problems encountered during the evaluation process. They can assist in determining the best solution for each problem. These computers give consumers and businesses the opportunity to make decisions and seek investments in businesses offering this technology.

    Solving Complex Problems

    Quantum computers are fast and reliable. Conventional computers have trouble solving complicated problems and storing information. Fortunately, quantum computers can solve such problems in seconds. There is an assumption that integrating quantum computing with artificial intelligence and machine learning can advance the technology even further, enabling still more complicated problems to be addressed.

    Integration of Data in Different Data Sets

    Quantum computers will enable quick analysis and integration of large collections of data. As a result, there will be improvements in machine learning and artificial intelligence. Breakthroughs are anticipated soon after quantum computers launch, because of the many different data collections in existence. Even so, human intervention will be required to assist in training the computer. For example, a computer needs to comprehend the links between special schemes that may be present in distinct data sources.

    Identifying Patterns in Big Data Sets

    Data might pick up anomalies during transmission. Quantum computers are anticipated to recognize these patterns immediately. It’s intriguing how these computers can multi-task, for instance scanning a data set and spotting a pattern at the same time, though this process might take some time given the computation required.

    Overall, the arrival of quantum computers signals a promising future in the computer world. This technology will enhance businesses. Analysis and information processing will be easier, and in the event of a problem, quantum computers are designed to address all sorts of complexities. Furthermore, these computers can ensure precision in data processing. Our ability to rely on computed information will grow astronomically as we expand the usage of quantum computers.