How to Confuse a Neural Network Using the Fast Gradient Sign Method

by admin
August 1, 2021

Many machine learning models, including neural networks, consistently misclassify adversarial examples. Adversarial examples are nothing but specialized inputs created to confuse neural networks, ultimately leading to misclassification. These infamous inputs look virtually identical to the original image to human eyes, but cause a neural network to fail to identify the image's content. Such inputs are formed by applying a small but intentionally worst-case perturbation to an example from the dataset, such that the perturbed input leads the model to give an incorrect answer, and with high confidence.

Classifiers based on modern machine learning techniques that achieve high performance on test data are not learning the true underlying concepts that determine the correct output labels. Instead, these algorithms perform impressively on naturally occurring data, but when exposed to fake or tampered data they assign high probability to the wrong labels. This is particularly disappointing because a popular approach in computer vision is to use convolutional network features as a space in which Euclidean distance approximates perceptual distance. That resemblance is clearly flawed if images with an immeasurably small perceptual distance correspond to completely different classes in the network's representation.

There are several types of attacks, but the focus here is on the Fast Gradient Sign Method (FGSM) attack, which is a whitebox attack. Whitebox attacks are those where the hacker or attacker has full access to the model being attacked.

The classic example used to explain the method is the panda image from Goodfellow et al.'s "Explaining and Harnessing Adversarial Examples": initially, the model predicts the image as a panda with decent confidence; the attacker then adds a perturbation to the original image, and the model misclassifies it as a gibbon, with very high confidence. The technique used to introduce the disturbance is FGSM, and it is the method we are going to discuss.

How does the Fast Gradient Sign Method work?

FGSM makes full use of the gradients of a neural network to build an adversarial image: it computes the gradient of the loss function (e.g. MSE or cross-entropy) with respect to the input image and then uses the sign of that gradient to create a new, adversarial image.

Gradients are taken w.r.t. the input image because the objective is to create an image that maximizes the loss. This is done by finding how much each pixel contributes to the loss value and adding perturbation accordingly. This does not change the model's parameters in any way, since the model is already trained; we only read the sign of the gradient.
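
Formally (following Goodfellow et al.), the adversarial image is constructed as:

adv_x = x + ε * sign(∇x J(θ, x, y))

where x is the original input image, y is its true label, θ are the model parameters, J is the loss function, and ε is a small multiplier that controls the size of the perturbation.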

Briefly, the method works in the following steps:

  • Take an image
  • Predict its class using a CNN
  • Compute the loss of the prediction against the true label
  • Calculate the gradients of the loss w.r.t. the input image
  • Compute the sign of the gradient
  • Use that sign to generate a new, adversarial image

Let’s implement this method. To explain it, we use the official code for this technique from the TensorFlow tutorials.

Implementation of FGSM in Python

Import all dependencies:
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib as mpl

# define the figure size
mpl.rcParams['figure.figsize'] = (7,7)
Load the pretrained model:

Right here we’re utilizing MobileNetV2 on the imagenet dataset;

pre_trained_model = tf.keras.applications.MobileNetV2(
    include_top=True, weights="imagenet")
pre_trained_model.trainable = False

# helper that decodes ImageNet predictions into human-readable labels
decode_prediction = tf.keras.applications.mobilenet_v2.decode_predictions
Helper Functions:

Below, two helper functions are defined: one to preprocess the input image so that it can be handled by our model, and another to extract the predicted label:

def process(image):
  # cast to float, resize to MobileNetV2's input size, scale pixels to [-1, 1]
  image = tf.cast(image, tf.float32)
  image = tf.image.resize(image, (224, 224))
  image = tf.keras.applications.mobilenet_v2.preprocess_input(image)
  image = image[None, ...]  # add a batch dimension
  return image

def imagenet_label(probs):
  # return the top-1 (class_id, class_name, confidence) tuple
  return decode_prediction(probs, top=1)[0][0]
Load the image:

Here we will load the image from the web and check the prediction; you can also try an image from your local machine:

image_path = tf.keras.utils.get_file(
    'Labrador_on_Quantock_%282175262184%29.jpg',
    'https://upload.wikimedia.org/wikipedia/commons/thumb/3/34/Labrador_on_Quantock_%282175262184%29.jpg/1200px-Labrador_on_Quantock_%282175262184%29.jpg')
image_raw = tf.io.read_file(image_path)
image = tf.image.decode_image(image_raw)

image_processed = process(image)
img_probs = pre_trained_model.predict(image_processed)

plt.imshow(image_processed[0] * 0.5 + 0.5)  # undo the [-1, 1] scaling for display
_, image_class, class_confidence = imagenet_label(img_probs)
plt.title('{} : {:.2f}% Confidence'.format(image_class, class_confidence * 100))
plt.axis('off')
plt.show()
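
If everything is wired up correctly, the plot should show the photo labelled as a Labrador retriever, the ImageNet class (index 208) we use as the true label below.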
Create an Adversarial Image:

As explained above, here we extract the gradient with respect to the input image to create the perturbation that will be used to distort the original image.

loss = tf.keras.losses.CategoricalCrossentropy()

def adv_pattern(input_image, input_label):
  with tf.GradientTape() as tape:
    tape.watch(input_image)  # the input is not a variable, so watch it explicitly
    prediction = pre_trained_model(input_image)
    loss_ = loss(input_label, prediction)

  # Get the gradients of the loss w.r.t. the input image
  gradient = tape.gradient(loss_, input_image)
  # Get the sign of the gradient to create the perturbation
  signed_grad = tf.sign(gradient)
  return signed_grad

Let's see the generated perturbation; 208 is the Labrador retriever's class index in ImageNet:

labrador_retriever_index = 208
label = tf.one_hot(labrador_retriever_index, img_probs.shape[-1])
label = tf.reshape(label, (1, img_probs.shape[-1]))

perturbations = adv_pattern(image_processed, label)
plt.imshow(perturbations[0] * 0.5 + 0.5)  # rescale from [-1, 1] for display
Confusing the network:

Trying several values of epsilon (a value that sets the level of perturbation) helps in observing the effect; you will see that as we increase the value, the network misclassifies the image more quickly.

def display_images(image, description):
  _, label, confidence = imagenet_label(pre_trained_model.predict(image))
  plt.figure()
  plt.imshow(image[0] * 0.5 + 0.5)
  plt.title('{} \n {} : {:.2f}% Confidence'.format(description,
              label, confidence * 100))
  plt.axis('off')
  plt.show()

epsilons = [0, 0.01, 0.1, 0.15]
descriptions = [('Epsilon = {:0.3f}'.format(eps) if eps else 'Input')
                for eps in epsilons]

for i, eps in enumerate(epsilons):
  # add the scaled perturbation and clip to the valid [-1, 1] pixel range
  adv_x = image_processed + eps * perturbations
  adv_x = tf.clip_by_value(adv_x, -1, 1)
  display_images(adv_x, descriptions[i])

Running the loop displays the result for each value of epsilon; as epsilon increases, the perturbation becomes more visible while the misclassification sets in more quickly.

Conclusion

The most common defence consists of training your network on adversarial images of each class, generated using the target model. This improves the overall generalization of the model but does not provide meaningful robustness by itself. In that case, we can use other defensive techniques, such as guided denoisers or defensive distillation, to get true robustness.
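
As an illustration, below is a minimal sketch of that adversarial-training defence in the same TensorFlow style. It assumes a Keras classifier called model and an optimizer object still being trained (hypothetical names, not defined in this article); each step regenerates FGSM examples against the current weights and trains on a mix of clean and perturbed batches.

adv_loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm(model, images, labels, eps):
  # same FGSM pattern as above, but crafted against the model being trained
  with tf.GradientTape() as tape:
    tape.watch(images)
    loss_value = adv_loss_fn(labels, model(images, training=False))
  gradient = tape.gradient(loss_value, images)
  return tf.clip_by_value(images + eps * tf.sign(gradient), -1, 1)

def adversarial_train_step(model, optimizer, images, labels, eps=0.1):
  # train on a batch made of both clean and adversarial examples
  adv_images = fgsm(model, images, labels, eps)
  batch = tf.concat([images, adv_images], axis=0)
  batch_labels = tf.concat([labels, labels], axis=0)
  with tf.GradientTape() as tape:
    loss_value = adv_loss_fn(batch_labels, model(batch, training=True))
  grads = tape.gradient(loss_value, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss_value

Because the adversarial batch is regenerated at every step, the model keeps training against attacks on its own current weights rather than on a fixed set of precomputed images.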

In this article, we have seen that FGSM can be used to fool a network. You can also try this method with different models and architectures. The main deciding factor is the epsilon value: by tuning it, we trade off between how noticeable the perturbation is and how strongly the classification is affected.
