from quantopian.interactive.data.sentdex import sentiment
Above, we're bringing in the Sentdex sentiment dataset. It provides sentiment data from roughly June 2013 onward for about 500 companies, and is free to use on Quantopian up to a rolling one month ago. The signal ranges from -3 to +6, where +6 is as positive as -3 is negative; I just personally found it more necessary to have granularity on the positive side of the scale.
We will also import the Q1500US, which is Quantopian's sort of "index" that tracks 1,500 of the most liquid companies, the ones that make the most sense for trading. The idea here is that, in order to back-test properly, you need to assume your orders will actually fill at a fair pace. They might take a minute to fill, but we're not expecting them to take days. The Q1500US is a nightly-updated list of acceptable companies that we can rely on to be liquid.
from quantopian.pipeline.filters.morningstar import Q1500US
type(sentiment)
Note that the datasets you import in the Research section are Blaze expressions. More info: https://blaze.readthedocs.io/en/latest/
We can see the attributes:
dir(sentiment)
Blaze abstracts out computation and storage, aiming to give you faster speeds. From what I've seen, Blaze is about 4-6x faster than a typical pandas dataframe. Considering the sizes of the dataframes we're using here and the compute times, that's a great improvement; we'll take it. For the most part, however, we're just going to treat this like a pandas dataframe. For example:
BAC = symbols('BAC').sid
bac_sentiment = sentiment[ (sentiment.sid==BAC) ]
bac_sentiment.head()
While .head() will still work, .peek() is Blaze-native and quicker:
bac_sentiment.peek()
In most cases, you're going to just run some computations in the form of filters and factors, but, if you did want to do some pandas-specific things on this data, you would first need to convert it back to a dataframe. For example, if you wanted to utilize the .plot attribute that a dataframe has, you would need to do this:
import blaze
bac_sentiment = blaze.compute(bac_sentiment)
type(bac_sentiment)
bac_sentiment.set_index('asof_date', inplace=True)
bac_sentiment['sentiment_signal'].plot()
The sentiment signals are generated by moving-average crossovers computed straight from the raw sentiment. Initially, those moving averages are going to be quite wild, so you wouldn't want to use the earliest data. For example:
bac_sentiment = bac_sentiment[ (bac_sentiment.index > '2016-06-01') ]
bac_sentiment['sentiment_signal'].plot()
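Sentdex's exact signal construction isn't published, but a moving-average crossover over raw sentiment might look something like this minimal sketch. The series, window sizes, and values below are all made up for illustration:

```python
import pandas as pd
import numpy as np

# Hypothetical illustration: a crossover signal built from a synthetic
# daily "raw sentiment" series. Windows of 5 and 20 days are assumptions.
np.random.seed(0)
dates = pd.date_range('2016-01-01', periods=120, freq='D')
raw = pd.Series(np.random.randn(120).cumsum(), index=dates)

fast = raw.rolling(window=5).mean()   # short moving average
slow = raw.rolling(window=20).mean()  # long moving average

# +1 when the fast MA is above the slow MA (bullish), -1 otherwise.
signal = pd.Series(np.where(fast > slow, 1, -1), index=dates)

print(signal.tail())
```

Note that the first ~20 days of the signal are unreliable because the slow moving average hasn't "warmed up" yet, which is exactly why we trimmed the earliest data above.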
The idea behind the pipeline is to allow you to quickly and efficiently consider many thousands of companies (~8,000 total on Quantopian).
The challenge that Pipeline overcomes for you is that, in a typical strategy, you might want to compute a function, or maybe check for some fundamental factor, but you want to do this against all companies, not just some arbitrarily limited group of companies. Pipeline allows you to address all companies, then filter them.
We will start with a simple example:
from quantopian.pipeline import Pipeline
def make_pipeline():
    return Pipeline()
A pipeline object is created with our make_pipeline() function, but we're not doing anything with it yet: we haven't filtered any companies, so this pipeline will contain every company.
To actually run a pipeline, we need to import run_pipeline. It's important to note that this is different in the research environment than in an algorithm, as a few of your imports will be. To bring in the run_pipeline function for research:
from quantopian.research import run_pipeline
my_pipe = make_pipeline()
result = run_pipeline(my_pipe, start_date='2015-05-05', end_date='2015-05-05')
In this case, our result is just for a single day. The more days you consider, the longer the process will take, so, while we're just learning, we'll keep it short. The result is a pandas dataframe, so we can do all sorts of things with it. For now, it's actually a pretty boring one:
result.head()
len(result)
We can also see that we've not reduced our universe of companies of interest at all! Let's modify our pipeline function to fix this!
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.filters.morningstar import Q1500US
from quantopian.pipeline.data.sentdex import sentiment
def make_pipeline():
    # Factor: the most recent sentiment signal for each security.
    sentiment_factor = sentiment.sentiment_signal.latest

    # Our universe is made up of stocks that have a non-null sentiment
    # signal and are in the Q1500US.
    universe = (Q1500US()
                & sentiment_factor.notnull())

    # Go long the stocks with a strongly positive signal (>= 4), and
    # short the stocks with a weak signal (<= 2).
    pipe = Pipeline(
        columns={
            'sentiment': sentiment_factor,
            'longs': (sentiment_factor >= 4),
            'shorts': (sentiment_factor <= 2),
        },
        screen=universe
    )
    return pipe
result = run_pipeline(make_pipeline(), start_date='2015-01-01', end_date='2016-01-01')
result.head()
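The frame that run_pipeline returns is indexed by (date, security). A quick sanity check before handing the factor to Alphalens is to count long and short candidates per day. Here's a sketch against a tiny stand-in frame (the tickers and values are made up, since the real pipeline output only exists on Quantopian):

```python
import pandas as pd

# A tiny stand-in for the (date, security) MultiIndex frame that
# run_pipeline returns; tickers and values here are made up.
index = pd.MultiIndex.from_product(
    [pd.to_datetime(['2015-01-02', '2015-01-05']), ['AAPL', 'BAC', 'XOM']],
    names=['date', 'security'])
result = pd.DataFrame({
    'sentiment': [6, 2, 4, 1, 5, 6],
    'longs':     [True, False, True, False, True, True],
    'shorts':    [False, True, False, True, False, False],
}, index=index)

# Count long and short candidates per day.
counts = result.groupby(level='date')[['longs', 'shorts']].sum()
print(counts)
```

If either count is zero on some days, the strategy would have nothing to trade on that side, which is worth knowing before you backtest.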
What Alphalens aims to do for us is help us analyze alpha factors over time. The point is to highlight where your alpha factor shines and where it doesn't, saving you from running and re-running backtests to diagnose issues with your strategy's thesis.
The pipeline returns to us basically whatever we asked of it. In the end, this is usually data you want to use in trading. In trading, our pipeline is lined up with pricing data over time, and trades are executed in that environment. With Alphalens, we grab pricing data for the securities we're interested in, then compare our trading signals with price over time to analyze the alpha factor in a variety of ways.
So, now, let's grab those prices:
assets = result.index.levels[1].unique()
pricing = get_pricing(assets, start_date='2014-12-01', end_date='2016-02-01', fields='open_price')
Notice that we're padding the pricing date range by about a month on each side: the extra trailing data gives us 'future' prices to compute forward returns against, and the extra leading data gives us pricing leading up to our signal.
Now we're going to run Alphalens. The factor is the signal that we're hoping is an alpha factor; quantiles are the groups you want to sort your signal into. Here, we have 2 groups, so they are de facto "bad" and "good" groups. To work correctly at the moment, your factor needs to range from "bad" to "good" in its signal. Periods are forward periods: in our case, 1, 5, and 10 days forward, used to calculate forward returns. For an explanation of everything here, see the video.
import alphalens
alphalens.tears.create_factor_tear_sheet(factor=result['sentiment'],
prices=pricing,
quantiles=2,
periods=(1,5,10))
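Under the hood, the core of what Alphalens computes is "mean forward return per factor quantile." Here's a heavily simplified single-asset sketch of that idea with synthetic data; the real Alphalens handles multi-asset alignment, demeaning, and far more:

```python
import pandas as pd
import numpy as np

# Synthetic price and factor series, for illustration only.
np.random.seed(1)
dates = pd.date_range('2015-01-01', periods=50, freq='B')
prices = pd.Series(100 + np.random.randn(50).cumsum(), index=dates)
factor = pd.Series(np.random.randn(50), index=dates)

# 5-day forward return: price 5 bars ahead relative to today.
fwd_5d = prices.shift(-5) / prices - 1

# Two quantiles, like quantiles=2 above: bucket 0 = "bad", 1 = "good".
buckets = pd.qcut(factor, 2, labels=False)

# Mean forward return within each factor bucket. For a real alpha
# factor, the "good" bucket should outperform the "bad" one.
mean_by_quantile = fwd_5d.groupby(buckets).mean()
print(mean_by_quantile)
```

This also makes the date padding above concrete: without extra trailing prices, the last few days of the factor would have no forward return to compare against.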
from quantopian.pipeline import Pipeline
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline.filters.morningstar import Q1500US
from quantopian.pipeline.data.sentdex import sentiment


def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    # Rebalance every day, 1 hour after market open.
    schedule_function(my_rebalance,
                      date_rules.every_day(),
                      time_rules.market_open(hours=1))

    # Record tracking variables at the end of each day.
    schedule_function(my_record_vars,
                      date_rules.every_day(),
                      time_rules.market_close())

    # Create our dynamic stock selector.
    attach_pipeline(make_pipeline(), 'my_pipeline')

    set_commission(commission.PerTrade(cost=0.001))


def make_pipeline():
    # Factor: the most recent sentiment signal for each security.
    sentiment_factor = sentiment.sentiment_signal.latest

    # Our universe is made up of stocks that have a non-null sentiment
    # signal and are in the Q1500US.
    universe = (Q1500US()
                & sentiment_factor.notnull())

    # A classifier to separate the stocks into quantiles based on
    # sentiment rank.
    sentiment_quantiles = sentiment_factor.rank(mask=universe,
                                                method='average').quantiles(2)

    # Go long the stocks with a strongly positive signal (>= 4), and
    # short the stocks with a weak signal (<= 2).
    pipe = Pipeline(
        columns={
            'sentiment': sentiment_quantiles,
            'longs': (sentiment_factor >= 4),
            'shorts': (sentiment_factor <= 2),
        },
        screen=universe
    )
    return pipe


def before_trading_start(context, data):
    """
    Called every day before market open.
    """
    try:
        context.output = pipeline_output('my_pipeline')
        # These are the securities that we are interested in trading each day.
        context.security_list = context.output.index.tolist()
    except Exception as e:
        print(str(e))


def my_rebalance(context, data):
    """
    Place orders according to our schedule_function() timing.
    """
    # Compute our portfolio weights.
    long_secs = context.output[context.output['longs']].index
    long_weight = 0.5 / len(long_secs)

    short_secs = context.output[context.output['shorts']].index
    short_weight = -0.5 / len(short_secs)

    # Open our long positions.
    for security in long_secs:
        if data.can_trade(security):
            order_target_percent(security, long_weight)

    # Open our short positions.
    for security in short_secs:
        if data.can_trade(security):
            order_target_percent(security, short_weight)

    # Close positions that are no longer in our pipeline.
    for security in context.portfolio.positions:
        if data.can_trade(security) and security not in long_secs \
                and security not in short_secs:
            order_target_percent(security, 0)


def my_record_vars(context, data):
    """
    Plot variables at the end of each day.
    """
    long_count = 0
    short_count = 0

    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            long_count += 1
        if position.amount < 0:
            short_count += 1

    # Plot the counts.
    record(num_long=long_count,
           num_short=short_count,
           leverage=context.account.leverage)
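The weighting arithmetic in my_rebalance() is worth spelling out: half the portfolio goes long, half goes short, split equally within each side, so gross leverage is 1.0 and net market exposure is 0.0. A quick sketch (the tickers are placeholders, not part of any real pipeline output):

```python
# Sketch of the weighting arithmetic in my_rebalance(): 50% of the
# portfolio long, 50% short, split equally within each side.
# Tickers are placeholders for illustration.
long_secs = ['AAPL', 'MSFT', 'XOM', 'JPM', 'BAC']
short_secs = ['F', 'GE']

long_weight = 0.5 / len(long_secs)     # 0.1 each
short_weight = -0.5 / len(short_secs)  # -0.25 each

# Gross leverage: total absolute exposure. Net: longs minus shorts.
gross = long_weight * len(long_secs) - short_weight * len(short_secs)
net = long_weight * len(long_secs) + short_weight * len(short_secs)

print(gross, net)  # gross leverage 1.0, net exposure 0.0
```

This dollar-neutral split is why the leverage recorded in my_record_vars() should hover around 1.0 when the strategy is behaving.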
Pyfolio is meant to analyze the risk and performance of a backtest.
bt = get_backtest('5883f1c6908a93476cf40baa')
bt.create_full_tear_sheet()