easy_pass: For easy storage of passwords

How are passwords stored?

A password should never be stored in plaintext. So when you're logging into your Google account or any other reliable service, it doesn't validate the password you entered by matching it against a plaintext copy of your password stored on some server.

It does something a little more complicated: when you make your account, it runs your password through an algorithm called a hash algorithm, which produces a “hashed” version of your password that looks something like this:

39e565156f3ec687d71c12c5cfd2f8de3e1e862124743e9c4a1a2a7dc605e88e
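That hash is 64 hexadecimal characters, which is the output length of SHA-256. Here is a minimal sketch of producing such a hash with Python's built-in hashlib, assuming SHA-256 (real systems typically also add a per-user salt, but this shows the idea):

```python
import hashlib

def hash_password(password: str) -> str:
    # Hash the UTF-8 bytes of the password and return the hex digest.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

print(hash_password("hunter2"))  # prints a 64-character hex string like the one above
```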

So whenever the same password is “hashed”, the same hash comes out, but with even a one-character change, the hash changes entirely. So when you want to log in to your account, the service hashes the password you entered and checks whether that hash matches the hash stored in its database. And it is near impossible to reverse a hash and recover the original text.
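Continuing the sketch above, login validation hashes the entered password and compares it to the stored hash (hmac.compare_digest performs a timing-safe comparison):

```python
import hashlib
import hmac

def hash_password(password: str) -> str:
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def verify_password(entered: str, stored_hash: str) -> bool:
    # Hash the entered password and compare it to the stored hash.
    # hmac.compare_digest avoids leaking timing information.
    return hmac.compare_digest(hash_password(entered), stored_hash)

stored = hash_password("hunter2")          # saved when the account is made
print(verify_password("hunter2", stored))  # True: same password, same hash
print(verify_password("hunter3", stored))  # False: one character changed, different hash
```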

This is preferable to just storing a password in plaintext: if your database is breached, the attacker only gets the hashes, and since hashes can't practically be reversed, the passwords themselves stay safe.

So I made a Python library to manage hashes and passwords, which I will be publishing soon. The library will be used for hashing passwords and validating those hashes against the real password. Stay tuned!

Making my own activation function

This is my first blog post and it's going to be about my activation function.

I made an activation function called Swish 2.0; it's an improvement on the Swish activation function. A video explaining what a neural network and an activation function are can be found here: https://youtu.be/bfmFfD2RIcg

An activation function is an important part of a neural network, as it introduces non-linearity into the model and makes it more accurate. Various activation functions exist; here is a list of some of them: https://en.m.wikipedia.org/wiki/Activation_function

So I experimented with activation functions and made my own. It outperforms an activation function called Swish, which was made by Google.
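For reference, the original Swish is defined as f(x) = x · sigmoid(βx), with β = 1 as the common default. The exact formula for Swish 2.0 isn't given in this post, so here is a minimal sketch of standard Swish only:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def swish(x: np.ndarray, beta: float = 1.0) -> np.ndarray:
    # Swish: f(x) = x * sigmoid(beta * x)
    # Approaches 0 for large negative x and approaches x for large positive x.
    return x * sigmoid(beta * x)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(swish(x))  # approx. [-0.0335, -0.2689, 0.0, 0.7311, 4.9665]
```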

A graph showing the accuracy of Swish 2.0 (red) compared to Swish (blue)

As you can see in the graph above, the red line rises above the blue one, so my activation function reached higher accuracy.

My activation function is extremely similar to Swish, but it outperformed Swish.

Activation functions can also be graphed. Here is a graph of Swish and my activation function:

The blue part of the graph above is Swish and the red part is my activation function. As you can see, on the left part of the graph the blue line and the red line align.

That means that Swish and my activation function have about the same output for inputs between -5.0 and 0.

But towards the right part of the graph, the blue line and the red line diverge, meaning they begin to produce different outputs.
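A minimal sketch of how such a graph can be drawn with matplotlib, plotting only standard Swish since the formula for Swish 2.0 isn't given here:

```python
import numpy as np
import matplotlib.pyplot as plt

def swish(x):
    # Standard Swish: f(x) = x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 500)
plt.plot(x, swish(x), color="blue", label="Swish")
plt.axhline(0.0, color="gray", linewidth=0.5)
plt.xlabel("x")
plt.ylabel("f(x)")
plt.title("Swish activation function")
plt.legend()
plt.show()
```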

Swish 2.0 (my activation function) could improve the performance of neural networks across the globe.

By Sahal Mulki