The reference answer (GPT, OpenAI): Here is the modified Perceptron class code, updated so that the class receives four input signals:
class Perceptron:
    def __init__(self):
        # Four weights, one per input signal, all initialized to 0.0.
        self.weights = [0.0, 0.0, 0.0, 0.0]
        self.bias = 0.0

    def predict(self, inputs):
        # Weighted sum of the four inputs plus the bias.
        activation = self.bias
        for i in range(len(inputs)):
            activation += inputs[i] * self.weights[i]
        # Step activation: +1.0 for a non-negative sum, -1.0 otherwise.
        if activation >= 0.0:
            return 1.0
        else:
            return -1.0

    def train(self, training_inputs, labels, learning_rate, num_epochs):
        for epoch in range(num_epochs):
            for i in range(len(training_inputs)):
                # Perceptron update rule: move the weights and bias toward the label.
                prediction = self.predict(training_inputs[i])
                error = labels[i] - prediction
                self.bias += learning_rate * error
                for j in range(len(self.weights)):
                    self.weights[j] += learning_rate * error * training_inputs[i][j]
In the __init__ method, we change the length of the weights array from 2 to 4 and initialize all its values to 0.0. The bias variable is still initialized to 0.0.
In the predict method, the loop now runs over all four inputs instead of two: each of the four elements of the inputs array is multiplied by the corresponding weight and added to the bias to compute the activation. If the activation value is greater than or equal to 0.0, 1.0 is returned; otherwise -1.0 is returned.
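As a quick check of the activation computation, here is a minimal sketch; the weight, bias, and input values are illustrative and not taken from the original answer:

p = Perceptron()
p.weights = [0.5, -0.25, 0.1, 0.0]   # illustrative weights
p.bias = 0.2                         # illustrative bias
# activation = 0.2 + 1.0*0.5 + 2.0*(-0.25) + 3.0*0.1 + 4.0*0.0 = 0.5
print(p.predict([1.0, 2.0, 3.0, 4.0]))  # 1.0, since 0.5 >= 0.0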
In the train method, we adjust the weight and input lengths accordingly: the weights array has length 4, and the input vector of each training sample must contain 4 elements. In the inner loop, the four elements of the input vector are multiplied by the learning rate and the error to update the weights, and the bias is updated in the same way as in the original version.
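A minimal usage sketch of the four-input class above; the training data, labels, learning rate, and epoch count are illustrative assumptions, not part of the original answer:

# Illustrative four-input data: label is +1.0 when the first input is 1.0, else -1.0.
training_inputs = [
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
]
labels = [1.0, 1.0, -1.0, -1.0]

perceptron = Perceptron()
perceptron.train(training_inputs, labels, learning_rate=0.1, num_epochs=10)
print(perceptron.predict([1.0, 0.0, 0.0, 1.0]))  # expected: 1.0 (first input is 1.0)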