In this article, we explore the development of a trading strategy for Bitcoin using a neural network model and various technical indicators. By leveraging 15-minute interval data, we aim to predict short-term price movements and compare the returns from our strategy against a traditional buy-and-hold approach.
Introduction to Neural Networks
Neural networks are a class of machine learning models inspired by the structure and functioning of the human brain. They are designed to recognize patterns, make decisions, and learn from data through a process of training and optimization. Here’s a brief overview of how neural networks work and their significance in modern machine learning and artificial intelligence.
1. Basic Structure
A neural network consists of layers of interconnected nodes, or neurons. The basic components are the input layer, which receives the raw features; one or more hidden layers, which transform those features through weighted connections; and the output layer, which produces the final prediction.
2. Neurons and Activation Functions
Each neuron in a neural network receives inputs, applies a weighted sum, and passes the result through an activation function. The activation function introduces non-linearity into the model, enabling it to learn complex patterns. Common activation functions include ReLU, sigmoid, tanh, and softmax.
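To make this concrete, here is a minimal, self-contained sketch of what a single neuron computes: a weighted sum followed by a ReLU activation. The weights and inputs are made up for illustration and are unrelated to the trading model built later in the article.

import numpy as np

inputs = np.array([0.5, -1.2, 3.0])    # hypothetical input values
weights = np.array([0.8, 0.1, -0.4])   # hypothetical learned weights
bias = 0.2                             # hypothetical learned bias

z = np.dot(weights, inputs) + bias     # weighted sum of inputs plus bias
output = max(0.0, z)                   # ReLU activation: max(0, z)
print(output)                          # prints 0.0, since z is negative here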
3. Training Neural Networks
Training a neural network involves adjusting its weights and biases to minimize the error between the predicted output and the actual output. This is typically done using backpropagation, which computes the gradient of a loss function with respect to each weight, combined with a gradient-based optimizer such as stochastic gradient descent or Adam.
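As a rough illustration with made-up numbers (not values from the model trained later), a single gradient-descent step nudges each weight a small distance against the gradient of the loss:

learning_rate = 0.001   # step size
weight = 0.8            # current value of one weight
gradient = 0.25         # dLoss/dWeight, as computed by backpropagation

weight = weight - learning_rate * gradient   # move slightly downhill on the loss surface
print(weight)                                # 0.79975

Repeating this update over many batches and epochs gradually reduces the loss, which is exactly what the Adam optimizer does for us later in the article.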
4. Types of Neural Networks
Common architectures include feedforward (dense) networks, convolutional neural networks (CNNs) for grid-like data such as images, and recurrent neural networks (RNNs) for sequential data. The model built in this article is a simple feedforward network of dense layers.
Step 1: Downloading Bitcoin Price Data
We start by pulling 15-minute interval Bitcoin price data from Yahoo Finance using the yfinance library. The data spans the most recent month.
import yfinance as yf
btc_data = yf.download('BTC-USD', interval='15m', period='1mo')
Step 2: Calculating Technical Indicators
Next, we compute three key technical indicators using the TA-Lib library: a 12-period exponential moving average (EMA_12), the 12-period standard deviation of the closing price (EMSD_12), and the 14-period Relative Strength Index (RSI_14). These indicators serve as inputs to the neural network model.
import talib as ta
btc_data['EMA_12'] = ta.EMA(btc_data['Close'], timeperiod=12)
btc_data['EMSD_12'] = ta.STDDEV(btc_data['Close'], timeperiod=12)
btc_data['RSI_14'] = ta.RSI(btc_data['Close'], timeperiod=14)
btc_data.dropna(inplace=True)
Step 3: Preparing Input Features and Target
We scale the features to the [0, 1] range using MinMaxScaler and prepare the target variable as a binary outcome: whether the next period's close price is higher than the current period's close price.
from sklearn.preprocessing import MinMaxScaler
import numpy as np
features = btc_data[['EMA_12', 'EMSD_12', 'RSI_14']]
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_features = scaler.fit_transform(features)

btc_data['Target'] = np.where(btc_data['Close'].shift(-1) > btc_data['Close'], 1, 0)
btc_data.dropna(inplace=True)
target = btc_data['Target']
Step 4: Building the Neural Network Model
The neural network is constructed using Keras. It takes the three scaled indicators as inputs, passes them through six fully connected hidden layers with ReLU activation, and ends with a softmax output layer. The model is compiled with the Adam optimizer and sparse categorical cross-entropy loss, then trained on the full dataset.
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
model = Sequential()
model.add(Dense(12, input_dim=3, activation='relu'))
model.add(Dense(40, activation='relu'))
model.add(Dense(30, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer=Adam(learning_rate=0.001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(scaled_features, target, epochs=400, batch_size=500, verbose=2)
Step 5: Training and Evaluating the Model
We split the data into training and test sets, retrain the model on the training data, and then evaluate its performance on the test data. We calculate accuracy and generate a classification report.
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
X_train, X_test, y_train, y_test = train_test_split(scaled_features, target, test_size=0.2, random_state=42)
model.fit(X_train, y_train, epochs=400, batch_size=500, verbose=2)

scores = model.evaluate(X_test, y_test)
y_pred = np.argmax(model.predict(X_test), axis=1)

accuracy = accuracy_score(y_test, y_pred)
report = classification_report(y_test, y_pred)
print(f"Test Accuracy: {accuracy}")
print("Classification Report:")
print(report)
Step 6: Comparing Strategy Returns with Buy-and-Hold
To assess the effectiveness of our neural network strategy, we compare the cumulative returns from both the buy-and-hold strategy and our model’s predictions.
We then plot both cumulative returns on the same graph.
import matplotlib.pyplot as plt
btc_data['Buy_Hold_Returns'] = np.log(btc_data['Close'] / btc_data['Close'].shift(1))
btc_data['Buy_Hold_Cumulative'] = btc_data['Buy_Hold_Returns'].cumsum()

btc_data['Signal'] = model.predict(scaled_features).argmax(axis=1)
btc_data['Strategy_Returns'] = btc_data['Signal'].shift(1) * btc_data['Buy_Hold_Returns']
btc_data['Strategy_Cumulative'] = btc_data['Strategy_Returns'].cumsum()

plt.figure(figsize=(14, 7))
plt.plot(btc_data.index, btc_data['Buy_Hold_Cumulative'], label='Buy and Hold Strategy', color='blue')
plt.plot(btc_data.index, btc_data['Strategy_Cumulative'], label='NN Strategy', color='green')
plt.title('Cumulative Returns: Buy and Hold vs. NN Strategy')
plt.xlabel('Date')
plt.ylabel('Cumulative Returns')
plt.legend()
plt.show()
Conclusion
The graph comparing the cumulative returns of the buy-and-hold strategy with those of our neural network-based strategy illustrates the potential of machine learning for short-term trading. Where buy-and-hold simply tracks the Bitcoin price, the model's signal can, in principle, keep the strategy out of the market during predicted declines and capture more of the upside during volatile periods.
This exercise demonstrates the power of combining technical analysis with machine learning to create trading strategies that adapt to market conditions. As always, further tuning and validation are essential before deploying such strategies in live trading environments.
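One possible way to strengthen that validation, sketched below under the assumption that scaled_features, target, and model are still in memory, is to evaluate the classifier on time-ordered folds with scikit-learn's TimeSeriesSplit rather than a single random split. This is an optional extension for illustration, not part of the strategy above.

from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import accuracy_score
import numpy as np

tscv = TimeSeriesSplit(n_splits=5)      # five expanding, time-ordered train/test folds
fold_accuracies = []
for train_idx, test_idx in tscv.split(scaled_features):
    X_tr, X_te = scaled_features[train_idx], scaled_features[test_idx]
    y_tr, y_te = target.iloc[train_idx], target.iloc[test_idx]
    # For simplicity the same Keras model object is reused; ideally it would be
    # rebuilt from scratch before each fold.
    model.fit(X_tr, y_tr, epochs=50, batch_size=500, verbose=0)
    preds = np.argmax(model.predict(X_te), axis=1)
    fold_accuracies.append(accuracy_score(y_te, preds))

print("Mean walk-forward accuracy:", np.mean(fold_accuracies))

Because each fold trains only on past data and tests on the data that follows it, this gives a more realistic picture of out-of-sample performance than a shuffled split.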