Wednesday 28 March 2018

Quantstrat forex trading


Trading with Bollinger Bands in R.

Bollinger Bands widen during periods of high volatility and draw close together during quiet periods, so they automatically adjust to the market's volatility; that is the real value of the tool. There are two conditions we look for in a trading opportunity: when the market is in an uptrend, we want to buy pullbacks to support, and when the market is in a downtrend, we want to sell rallies into resistance. Since Bollinger Bands generally provide good support and resistance for our trading regime, we need to make sure we are following a strongly trending pair.

Look at this example of a USDCHF daily chart (created with FXCM Marketscope charts). We can see a series of higher highs and higher lows, meaning the pair is trending up, so we look for dips down to the lower band as buying opportunities. There are two examples noted on the chart: the first in May and the second in June of this year. In each case marked with a box, the market traded down to the lower Bollinger Band. However, this is not necessarily a buy signal in itself; it is a signal to start looking for a buy on the reversal. Traders will decide their entries in a variety of ways, from using their favorite indicator to simply buying as the market moves up past its previous high. A popular approach is to buy the first candle that closes above the 20-day simple moving average in the middle of the bands; this confirms the reversal and increases the likelihood of the trade's success. On the chart above, the buy candles are marked with green arrows. A trader could then place a protective stop below the lowest wick inside the box, and look for twice that risk in profit for a 1:2 risk-reward ratio.

I would also note that USDCHF price action has moved down over the last several days and has touched the lower Bollinger Band four times, which means we may be looking at another buying opportunity. Rather than buying right away, though, pinpointing the entry is how we increase the trade's odds of success: exercising the patience and discipline of the last week and waiting for the first close above the 20-day simple moving average would be the way to enter this trade using the Bollinger Band strategy you have just learned.

New to the forex market? Save time and learn what FOREX trading is all about in a free 20-minute course from DailyFX Education. In it, you will learn the basics of FOREX trading, what leverage is, and how to determine an appropriate amount of leverage for your trading. Register here to start your forex trading now. DailyFX provides forex news and technical analysis on the trends that influence the global currency markets.
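As a rough illustration of the setup described above, here is a minimal R sketch of the entry condition: an uptrend, a recent touch of the lower band, and a first close back above the 20-day simple moving average. The instrument, the 100-day trend filter, and the data source are my own assumptions for demonstration, not part of the original article.

library(quantmod)
library(TTR)

# hypothetical example: daily data for a trending instrument
getSymbols("SPY", from = "2015-01-01")
px <- Cl(SPY)

bands <- BBands(px, n = 20, sd = 2)                 # 20-period Bollinger Bands
upTrend <- px > SMA(px, 100)                        # crude uptrend filter (my assumption)
touchedLower <- lag(px, 1) <= lag(bands$dn, 1)      # prior close at or below the lower band (simplified one-day lookback)
crossedMid <- px > bands$mavg & lag(px, 1) <= lag(bands$mavg, 1)  # first close back above the 20-day SMA

buySignal <- upTrend & touchedLower & crossedMid
index(px)[which(buySignal)]                         # dates of hypothetical entries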
This post will be an in-depth review of Alpha Architect's Quantitative Momentum book. Overall, in my opinion, the book is terrific for those who are practitioners in fund management in the individual equity space, and it still contains ideas worth thinking about outside of that space. However, the system the book details benefits from nested ranking (rank along the X axis, take the top decile, rank along the Y axis within the top decile in X, and take the top decile along the Y axis), essentially restricting selection to 1% of the universe. Also, the book doesn't do much to touch upon volatility controls, which may have greatly enhanced the system it outlines.

Before getting into the meat of this post, I'd like to let my readers know that I formalized my nuts-and-bolts-of-quantstrat series of posts as a formal datacamp course. Datacamp is a very cheap way to learn a bunch of R, and financial applications are among its topics. My course covers the basics of quantstrat, and if the people who complete it like it, I may create more advanced quantstrat modules on datacamp. I'm hoping the finance courses are well received, since there are financial topics in R I'd love to learn myself that a 45-minute lecture just isn't enough for (such as Dr. David Matteson's change-point magic, PortfolioAnalytics, and so on). In any case, here's the link.

So, let's start with a summary of the book. Part 1 is several chapters that are a giant expose of why momentum works, or at least has worked for at least 20 years since 1993: namely, that human biases and irrational behaviors act in certain ways that keep the anomaly alive. There's also career risk (that is, it's a risk factor: if your benchmark is SPY and you underperform it over a 3-year period, you have severe career risk, and a professional asset manager will essentially be fired), but if you stick with the anomaly through several years of relative underperformance, you'll come out ahead over the long run. Generally, I feel like there's more work to be done here, but if this is the best that can be done, okay, I'll accept it. Essentially, Part 1 is for the uninitiated; those who've been around the momentum block a few times can skip right past it. Unfortunately, it's half the book, which leaves a slightly sour taste in the mouth.

Next, Part 2 is, in my opinion, the real meat and potatoes of the book. Essentially, the algorithm can be summarized as follows: taking a universe of large- and mid-cap stocks, do the following.

1. At the end of every month, sort the stocks into deciles by their 2-12 momentum; that is, compute momentum as last month's closing price minus the closing price from 12 months ago. (Essentially, research states that there's a reversal effect in one-month momentum; however, in my experience, this effect doesn't carry over into the ETF world.)

2. Here's the interesting part, which makes the book worth picking up on its own: after sorting into deciles, rank the top decile by multiplying the sign of the 2-12 momentum by the quantity (% negative daily returns minus % positive daily returns), which is negative for a smooth winner. Essentially, the idea here is to determine the smoothness of the momentum. That is, at one extreme, imagine a stock that did nothing for 230 days and got its entire price appreciation from the few days it jumped on better-than-expected earnings reports (think Google jumping 10%); at the other extreme, a stock that simply had a small positive price appreciation every single day. Obviously, you want the second type of stock. Again, sort into deciles by this quantity and take the top decile. Taking the top decile of the top decile therefore leaves you with 1% of the universe, which makes the strategy very difficult to replicate, since you'd need to track a huge universe of stocks. That said, I think the expression is actually a pretty good stand-in for volatility: regardless of how volatile an asset is, whether as volatile as a commodity like DBC or as non-volatile as a fixed-income product like SHY, this expression is an interesting way of saying "this path is choppy" versus "this path is smooth". I may investigate this expression further on my blog in the future. (A small sketch of this ranking quantity appears at the end of this post.)

3. Lastly, if the portfolio is turned over quarterly instead of monthly, the best months to do so are February, May, August, and November, because a bunch of amateur asset managers like to window-dress their portfolios: their portfolio had a crummy quarter, so in the last month before they have to send out quarterly statements, they load up on recent winners so that their clients don't think they're the amateurs they really are, and there's a bump from this. Similarly, in January there are some selling anomalies due to tax-loss harvesting. As far as actual implementation goes, this is a very nice touch: acknowledging that turning over every month can get a bit expensive, Wes and Jack say, sure, turn it over once every three months, but do it in these particular months. Whether an extra percentage point or so per year is enough to cover the transaction costs is a very good question; it probably would be.

Most of all, the strategy is fairly simple to understand. Where the book falls short of perfect replication, however, is in the difficulty of obtaining CRSP data. That said, I applaud Alpha Architect for disclosing the entire algorithm from start to finish. Furthermore, in case basic 2-12 momentum isn't enough, the book details other types of momentum ideas in an appendix (earnings momentum, distance to 52-week highs, absolute historical momentum, and so on). None of these strategies really outperform the basic price momentum strategy by much, so they're there for the interested, but there's really nothing there you'd act on if you're trading once a month. Wes and Jack also mention that trend-following doesn't improve overall CAGR or Sharpe, but it does an enormous amount to improve maximum drawdown. That is, faced with the prospect of losing 70-80% of everything versus losing 30%, that's an easy choice. Trend-following, even a simple version, is good.
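Here is a minimal R sketch of the smoothness ranking quantity described in step 2, under my own reading of it: the sign of the 2-12 momentum times the percentage of negative daily returns minus the percentage of positive ones. The ticker and window handling are hypothetical, single-stock illustrations, not the book's CRSP-based implementation.

library(quantmod)

# hypothetical single-stock illustration of the smoothness score
getSymbols("AAPL", from = "2015-01-01")
px <- Ad(AAPL)

monthly <- px[endpoints(px, on = "months")]

# 2-12 momentum: last month's close minus the close 12 months ago
mom212 <- as.numeric(lag(monthly, 1) - lag(monthly, 12))

# smoothness over the matching window: % negative daily returns minus % positive
dailyRets <- dailyReturn(px)
smoothScore <- function(rets) mean(rets < 0) - mean(rets > 0)
window <- dailyRets[paste0(index(monthly)[length(monthly) - 12], "/",
                           index(monthly)[length(monthly) - 1])]

fip <- sign(tail(mom212, 1)) * smoothScore(window)
fip  # more negative = smoother momentum, ranked higher in the book's scheme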
Putting it all together, the book accomplishes what it sets out to do: present a well-researched algorithm. Ultimately, the punchline is on Alpha Architect's site (I believe they have some sort of monthly stock filter). Furthermore, the book states that the risk-adjusted returns are better when the algorithm is combined with the one described in their Quantitative Value book. The value algorithm didn't impress me in the backtests I've done, but I'll chalk that up to my being a novice with the various valuation metrics.

My criticism of the book, however, is this: what is the book's momentum algorithm missing? The paper "Momentum Has Its Moments", which I covered in my hypothesis-driven development series, shows that the ordinary Fama-French momentum strategy is handily outperformed by a risk-managed version that deleverages during periods of excess volatility, thereby avoiding momentum crashes. I don't know why Wes and Jack didn't address that paper, since the implementation is very simple: a leverage factor of target volatility divided by realized volatility. Ideally, I'd love to see Wes or Jack send me the stream of returns for this strategy (preferably daily, but monthly also works).

Essentially, I think the book is very comprehensive, but given the data required to replicate it, don't try this at home; it's certainly not a viable strategy if your broker charges per transaction, where you could sink thousands of dollars into transaction costs. I do wonder, however, whether Alpha Architect's QMOM ETF is, in effect, a better version of this strategy net of management fees. In any case, while the book leaves a little on the table, overall it accomplishes what it sets out to do, is clear about its procedures, and provides a few rewarding ideas. For the price of a non-technical textbook (that is, about $60 on Amazon), this book is a steal. Thanks for reading.

NOTE: while I am currently employed in a successful analytics capacity, I am interested in full-time positions more closely related to the topics of this blog. If you have a full-time position that can benefit from my current skills, my LinkedIn can be found here.

This post will introduce the component conditional value at risk mechanics found in PerformanceAnalytics, from a paper written by Brian Peterson, Kris Boudt, and Peter Carl. It is an easy-to-call mechanism for computing the component expected shortfall of asset returns as they apply to a portfolio. While the exact mechanics are fairly complex, the running time is nearly instantaneous, and the method is a solid tool to include in asset allocation analysis. For those interested in an in-depth analysis of the intuition of component conditional value at risk, I refer them to the paper written by Brian Peterson, Peter Carl, and Kris Boudt.

Essentially, the idea is this: all assets in a given portfolio have a marginal contribution to the total conditional value at risk, also known as expected shortfall; that is, the expected loss when the loss surpasses a certain threshold. For instance, if you want to know your 5% expected shortfall, it is the average of the worst 5 returns out of 100 days, and so on. For returns at a daily resolution, this may sound as if there will never be enough data in a sufficiently fast time frame; the formula for expected shortfall in PerformanceAnalytics therefore defaults to an approximation calculated with a Cornish-Fisher expansion, which delivers good results so long as the p-value isn't too extreme (that is, it works for relatively sane p-values, such as the 1%-10% range).

Component conditional value at risk has two uses. First, given no input weights, it uses an equal-weight default, which allows it to provide a risk estimate for each individual asset without burdening the researcher with creating his or her own correlation/covariance heuristics. Second, when a set of weights is provided, the output changes to reflect the contribution of the various assets in proportion to those weights. This means the methodology works very nicely with strategies that exclude assets based on momentum but need a weighting scheme for the remaining assets. Furthermore, this methodology also allows an ex-post analysis of risk contribution, to see which assets contributed how much to risk.

First, a demonstration of how the mechanism works, using the edhec data set. There is no strategy here, only a demonstration of syntax: computing the contribution to expected shortfall of each of the funds in the edhec data set. A minimal sketch appears below.
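This is a minimal sketch of that syntax, assuming the stock ES function in PerformanceAnalytics and its built-in edhec data set; the exact calls in the original post may have differed.

library(PerformanceAnalytics)
data(edhec)

# component expected shortfall with default (equal) weights,
# using the modified (Cornish-Fisher) estimator
tmp <- ES(edhec, p = 0.95, method = "modified", portfolio_method = "component")
tmp$pct_contrib_MES  # percentage contribution of each manager

# now with explicit weights: equal weight for the first ten managers, zero for the last three
wts <- c(rep(1/10, 10), rep(0, 3))
tmp2 <- ES(edhec, p = 0.95, method = "modified",
           portfolio_method = "component", weights = wts)
tmp2$pct_contrib_MES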
So tmp is the contribution to expected shortfall of each of the various edhec managers over the entire time period. Here's the output. The salient part of it is the percentage contribution in the final output. Notice that it can be negative, meaning that certain funds gain when others lose; at least, that was the case over the current data set. Those assets diversify the portfolio and actually lower expected shortfall. In this case, I equally weighted the first ten managers in the edhec data set and set the weights of the last three to zero. We can also see what happens when the weights are unequal: this time, the weight of the convertible arbitrage manager was increased, and so was his contribution to maximum expected shortfall.

For future backtests, I'd like to use the universe found in Faber's Global Asset Allocation book. That is, the simulations in that book go back to 1972, and I was wondering if anyone has daily returns for those asset indices. While some ETFs go back into the early 2000s, others do not (DBC, commodities, early 2006; GLD, gold, early 2004; BWX, foreign bonds, late 2007; FTY, NAREIT, early 2007), and an eight-year backtest would be a bit short, so I was wondering if anyone has data with more history.

One other thing: I will be in New York City for The Trading Show, speaking on the Programming Wars panel on October 6th. Thanks for reading.

NOTE: while I am currently contracting, I am also looking for a permanent position that can benefit from my skills for when my current contract ends. If you have or know of such an opening, I'd be happy to talk with you.

This post will cover a function to simplify creating Harry Long type rebalancing strategies from SeekingAlpha. As Harry Long has stated, most, if not all, of his strategies are more for demonstrative purposes rather than actual recommended investments. Since Harry Long has been posting more articles on Seeking Alpha, a reader or two asked me to re-analyze his strategies; instead of doing that, I'll leave this tool here: a wrapper that automates the data acquisition and simulates portfolio rebalancing with one line of code. Here is the tool; a minimal sketch of it appears below.
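This is a minimal sketch of such a wrapper under my own assumptions (Yahoo data via quantmod, statistics via PerformanceAnalytics); the original function may have differed in details such as argument names and the statistics reported.

library(quantmod)
library(PerformanceAnalytics)

# hypothetical wrapper: fetch tickers, rebalance at fixed weights, show stats
LongSim <- function(tickers, weights, rebalance_on = "years", from = "2003-01-01") {
  rets <- do.call(cbind, lapply(tickers, function(tk) {
    dailyReturn(Ad(getSymbols(tk, from = from, auto.assign = FALSE)))
  }))
  colnames(rets) <- tickers
  rets <- na.omit(rets)  # results begin at the inception of the youngest instrument
  portf <- Return.portfolio(rets, weights = weights, rebalance_on = rebalance_on)
  charts.PerformanceSummary(portf)
  rbind(table.AnnualizedReturns(portf), maxDrawdown(portf), CalmarRatio(portf))
}

# the 80% XLP / 20% TMF (aka "80/60") example, rebalanced weekly
LongSim(c("XLP", "TMF"), weights = c(.8, .2), rebalance_on = "weeks")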
It pulls data from Yahoo by default (a big thank you to Helmuth Vollmeier for the long histories of ZIV and VXX), and simply displays an equity curve and a few statistics (CAGR, annualized standard deviation, Sharpe, max drawdown, Calmar), with an option to output the return stream for further analysis in R. Here's an example of using it to get the statistics for a portfolio of 80% XLP (more or less interchangeable with SPLV) and 20% TMF (aka 60% TLT), i.e., the 80/60 portfolio from Harry Long's article. It's what we can expect from a balanced stock-bond portfolio: generally doing well, taking its largest hit in the financial crisis, with some other bumps in the road, but overall, a vanilla set-it-and-forget-it sort of thing. And here's how to obtain the stream of daily returns, assuming you want to rebalance these two instruments weekly, and then get the statistics. Moving the rebalancing from annually to weekly doesn't have much of an effect here, aside from giving a bit more money to your broker if you take transaction costs into account (this method doesn't). The results, of course, begin at the inception of the youngest instrument. The trick is to try to find proxy substitutes with longer histories for the newer ETFs that are simply leveraged ETFs, such as using a 60% weight in TLT with an 80% weight in XLP instead of a 20% weight in TMF with an 80% allocation in SPLV. Here are a few proxies: SPXL = UPRO = SPY * 3, TMF = TLT * 3.

I've worked with Harry Long before, and he develops more sophisticated strategies behind the scenes, so I'd recommend treating the strategies he publishes on SeekingAlpha as concept demonstrations. If you're an investment institution interested in more customized, private solutions, contact Mr. Long about them. Thanks for reading.

NOTE: I am currently in the northeast. While I am currently contracting, I am interested in networking with individuals or firms with regards to potential collaboration opportunities.
This post will shed light on the values of R²s behind two rather simplistic strategies: the simple 10-month SMA, and its relative, the 10-month momentum (which is simply a difference of SMAs, as Alpha Architect showed in their book DIY Financial Advisor).

Not too long ago, a friend of mine named Josh asked me a question regarding R²s in finance. He's finishing up his PhD in statistics at Stanford, so when people like that ask me questions, I'd like to answer them. His assertion is that in some instances, models that have less than perfect predictive power (e.g., R²s of .4, for instance) can still deliver very promising predictions, and that if someone were to have a financial model that was able to explain 40% of the variance of returns, they could happily retire with that model making them very wealthy. Indeed, .4 is a very optimistic outlook (to put it lightly), as this post will show.

In order to illustrate this example, I took two staple strategies: buy SPY when its closing monthly price is above its ten-month simple moving average, and when its ten-month momentum (basically the difference of a ten-month moving average and its lag) is positive. While these models are simplistic, they are ubiquitously talked about, and many momentum strategies are an improvement upon these baseline, out-of-the-box strategies. Here's the code to do that.
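The code accompanying the original post isn't reproduced in this archive; the following is a minimal reconstruction of the two strategies under the same definitions (monthly SPY closes, ten-month SMA, ten-month momentum, trading the month after the signal).

library(quantmod)
library(PerformanceAnalytics)
library(TTR)

getSymbols("SPY", from = "1993-01-01")
monthlySPY <- Ad(SPY)[endpoints(SPY, on = "months")]
monthlyRets <- Return.calculate(monthlySPY)

# the two signals: price above its 10-month SMA, and positive 10-month momentum
smaSig <- monthlySPY > SMA(monthlySPY, 10)
momSig <- momentum(monthlySPY, 10) > 0

# trade next month on this month's signal
strats <- cbind(lag(smaSig) * monthlyRets, lag(momSig) * monthlyRets, monthlyRets)
colnames(strats) <- c("SMA10", "MOM10", "BuyAndHold")
charts.PerformanceSummary(strats)
rbind(table.AnnualizedReturns(strats), maxDrawdown(strats), CalmarRatio(strats))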
And here are the results. In short, the SMA10 and the 10-month momentum (aka ROC 10, aka MOM10) both handily outperform buy and hold, not only in absolute returns, but especially in risk-adjusted returns (Sharpe and Calmar ratios). Again, this is simplistic analysis, and many models get much more sophisticated than this, but once again, it's a simple, illustrative example using two strategies that outperform a benchmark (over the long term, anyway).

Now, the question is: what was the R² of these models? To answer this, I took a rolling five-year window that essentially asked how well these quantities (the ratio between the closing price and the moving average minus 1, or the ten-month momentum) predicted the next month's returns. That is, what proportion of the variance is explained by the monthly returns regressed against the previous month's signals in numerical form? (Perhaps not the best framing, as the trading signal is binary as opposed to the continuous quantity being regressed, but let's set that aside, again, for the sake of illustration.) Here's the code to generate the answer.
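Again, a minimal reconstruction under my reading of the description: a 60-month rolling regression of returns on the prior month's signal quantities, keeping the R² of each fit. It reuses the monthlySPY and monthlyRets objects from the sketch above.

library(quantmod)
library(PerformanceAnalytics)
library(TTR)

getSymbols("SPY", from = "1993-01-01")
monthlySPY <- Ad(SPY)[endpoints(SPY, on = "months")]
monthlyRets <- Return.calculate(monthlySPY)

# the signal quantities in numerical (not binary) form
smaQty <- monthlySPY / SMA(monthlySPY, 10) - 1
momQty <- momentum(monthlySPY, 10)

rollingR2 <- function(signalQty, rets, window = 60) {
  df <- na.omit(cbind(rets, lag(signalQty)))  # next-month return vs. last month's signal
  rollapply(df, width = window, by.column = FALSE, align = "right",
            FUN = function(x) summary(lm(x[, 1] ~ x[, 2]))$r.squared)
}

r2s <- cbind(rollingR2(smaQty, monthlyRets), rollingR2(momQty, monthlyRets))
colnames(r2s) <- c("SMA10", "MOM10")
plot(r2s, main = "Rolling 5-year R^2 of next-month returns on signal")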
And the answer, in pictorial form. In short, even in the best case scenarios, namely crises, which give momentum trend-following (call it what you will) its raison d'etre, that is, its risk management appeal, the proportion of variance explained by the actual signal quantities was very small: in the best of times, around 20%. But then again, think about what the R² value actually is: it's the percentage of variance explained by a predictor. If a small set of signals (let alone one) were able to explain the majority of the change in the returns of the S&P 500, or even a not-insignificant portion, such a person would stand to become very wealthy. More to the point, given that two strategies that handily outperform the market have R²s that are exceptionally low for extended periods of time, it goes to show that holding the R² up as some form of statistical holy grail certainly is incorrect in the general sense, and anyone who does so either is painting with too broad a brush, is creating disingenuous arguments, or should simply attempt to understand another field which may not work the way their intuition tells them. Thanks for reading.

This review will cover the Adaptive Asset Allocation: Dynamic Global Portfolios to Profit in Good Times and Bad book by the people at ReSolve Asset Management. Overall, this book is a definite must-read for those who have never been exposed to the ideas within it. However, when it comes to a solution that can be fully replicated, this book is lacking.

Okay, it's been a while since I reviewed my last book, DIY Financial Advisor, from the awesome people at Alpha Architect. This book, in my opinion, is set up in a similar sort of format. This is the structure of the book, and my reviews along with it.

Part 1: why in the heck you actually need to have a diversified portfolio, and why a diversified portfolio is a good thing. In a world in which there is so much emphasis put on single-security performance, this is certainly something that absolutely must be stated for those not familiar with portfolio theory. It highlights the example of two people, one from Abbott Labs and one from Enron, who had so much of their savings concentrated in their company's stock. Mr. Abbott got hit hard and changed his outlook on how to save for retirement, and Mr. Enron was never heard from again. Long story short: a diversified portfolio is good, and a properly diversified portfolio can offset one asset's zigs with another asset's zags. This is the key to establishing a stream of returns that will help meet financial goals. Basically, this is your common-sense story (humans love being told stories) so as to motivate you to read the rest of the book. It does its job, though for someone like me, it's more akin to a big "wait for it, wait for it... and there's the reason why we should read on", as expected.

Part 2: something not often brought up in many corners of the quant world (because it's real-life boring stuff) is the importance not only of average returns, but of when those returns are achieved. Namely, imagine your everyday saver. At the beginning of their careers, they're taking home less salary and have less money in their retirement portfolio (or speculation portfolio, but the book uses retirement portfolio). As they get into middle age and closer to retirement, they have a lot more money in said retirement portfolio. Thus, strong returns are most vital when there is more cash available to the portfolio, and the difference between mediocre returns at the beginning and strong returns at the end of one's working life, as opposed to vice versa, is astronomical and cannot be understated. Furthermore, once in retirement, strong returns in the early years matter far more than returns in the later years, once money has been withdrawn out of the portfolio (though I'd hope that a portfolio's returns can be so strong that one can simply live off the interest). Or, put more intuitively: when you have $10,000 in your portfolio, a 20% drawdown doesn't exactly hurt, because you can make more money and put more into your retirement account. But when you're 62 and have $500,000 and suddenly lose 30% of everything, well, that's massive. How much an investor wants to avoid such a scenario cannot be understated. Warren Buffett once said that if you can't bear to lose 50% of everything, you shouldn't be in stocks. I really like this part of the book because it shows just how dangerous the ideas of "a 50% drawdown is unavoidable" and other "stay invested for the long haul" refrains are. Essentially, this part of the book makes a resounding statement that any financial adviser keeping his or her clients invested in equities when they're near retirement age is doing something not very advisable, to put it lightly. In my opinion, those who advise pension funds should especially keep this section of the book in mind, since for some people, the long-term may be coming to an end, and what matters is not only steady returns, but also making sure the strategy doesn't fall off a cliff and destroy decades of hard-earned savings. (A small simulation of this sequence-of-returns effect appears below.)
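To make the sequence-of-returns point concrete, here is a small hypothetical simulation (my own, not from the book): two savers earn exactly the same set of annual returns, just in opposite orders, while contributing a fixed amount each year.

# same returns, different order: contributions make the sequence matter
rets <- c(rep(-0.10, 10), rep(0.15, 10))   # ten bad years, then ten good years
contrib <- 10000                            # contributed at the start of each year

terminalWealth <- function(rets) {
  w <- 0
  for (r in rets) w <- (w + contrib) * (1 + r)
  w
}

terminalWealth(rets)       # bad decade early, strong returns when the pot is large
terminalWealth(rev(rets))  # strong returns early, bad decade just before retirement

Both paths have the same average return, but the saver who suffers the bad decade late in life, when the portfolio is at its largest, ends up with far less.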
Part 3 is also a very important read. First off, it lays out in clear terms that the long-term, forward-looking valuations for equities are at rock bottom. That is, the expected forward 15-year returns are very low, using approximately 75 years of evidence. Currently, according to the book, equity valuations imply a negative 15-year forward return. However, one thing I will take issue with is that while forward-looking long-term returns for equities may be very low, if one believed this chart and only invested in the stock market when forecast 15-year returns were above the long-term average, one would have missed out on both the 2003-2007 bull run and the one since 2009 that's just about over. So, while the book makes a strong case for caution, readers should also take the chart with a grain of salt, in my opinion. However, another aspect of portfolio construction that this book covers is how to construct a universe of robust (usable in any economic environment) and coherent asset classes, balanced in number, for implementation with any asset allocation algorithm. I think this bears repeating: universe selection is an extremely important topic in the discussion of asset allocation, yet there is very little discussion about it. Most research topics simply take some conventional universe, such as "all stocks on the NYSE", or "all the stocks in the S&P 500", or "the entire set of the 50-60 most liquid futures", without consideration for robustness and coherence. This book is the first source I've seen that actually puts this topic under a magnifying glass, besides a finger-in-the-air pick and choose.

Part 4: and here's where I level my main criticism at this book. For those who have read Adaptive Asset Allocation: A Primer, this section of the book is basically one giant copy and paste. It's all one large buildup to momentum rank plus minimum-variance optimization. All well and good, except that there's very little detail beyond the basics as to how the minimum variance portfolio was constructed. Namely, what exactly is the minimum variance algorithm in use? Is it one of the poor variants susceptible to the numerical instability inherent in inverting sample covariance matrices? Or is it a heuristic like David Varadi's minimum variance and minimum correlation algorithm? The one feeling I absolutely could not shake was that this book had a perfect opportunity to present a robust approach to minimum variance, and instead, it's long on concept, short on details. While the theory of "maximize return for unit risk" is all well and good, the actual algorithm to implement that theory into practice is not trivial, with the solutions taught to undergrads and master's students having some well-known weaknesses. On top of this, one thing that got hammered into my head in the past was that ranking also has a weakness at the inclusion/exclusion point. E.g., if, out of ten assets, the fifth asset had a momentum of, say, 10.9%, and the sixth asset had a momentum of 10.8%, how are we so sure the fifth is so much better? And while I realize that this book was ultimately meant to be a primer, in my opinion, it would have been a no-objections five-star if there were an appendix that actually went into some detail on how to go from the simple concepts to practice, and included a small numerical example of some algorithms that may address the well-known weaknesses. This doesn't mean Greek mathematical jargon: just an appendix that acknowledged that not every reader is someone only picking up his first or second book about systematic investing, and that some of us are familiar with the "whys" and are more interested in the "hows". Furthermore, I'd really love to know where the authors of this book got their data to back-date some of these ETFs into the 90s.
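For reference, here is a minimal sketch of the textbook long-only minimum variance weights via quadratic programming. This is my own illustration of the "taught to undergrads" baseline the review refers to, not the book's (undisclosed) algorithm, and it inherits the sample-covariance weaknesses mentioned above.

library(quadprog)

# textbook long-only minimum variance: minimize w' S w  s.t.  sum(w) = 1, w >= 0
minVarWeights <- function(returns) {
  S <- cov(returns)                       # sample covariance: the known weak point
  n <- ncol(S)
  A <- cbind(rep(1, n), diag(n))          # constraints: full investment, no shorting
  sol <- solve.QP(Dmat = 2 * S, dvec = rep(0, n),
                  Amat = A, bvec = c(1, rep(0, n)), meq = 1)
  setNames(round(sol$solution, 4), colnames(returns))
}

# toy example on random data
set.seed(42)
rets <- matrix(rnorm(1000 * 4, sd = c(0.01, 0.02, 0.03, 0.04)), ncol = 4, byrow = TRUE)
colnames(rets) <- c("A", "B", "C", "D")
minVarWeights(rets)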
Part 5: some more formal research on topics already covered in the rest of the book, namely a section about how many independent bets one can take as the number of assets grows, if I remember it correctly. Long story short: you easily get the most bang for your buck among disparate asset classes, such as treasuries of various durations, commodities, developed vs. emerging equities, and so on, as opposed to trying to pick among stocks in the same asset class (though there's some potential for alpha there, just a lot less than you imagine). So in case the idea of asset class selection, not stock selection, wasn't beaten into the reader's head before this point, this part should do the trick. The other research paper is something I briefly skimmed over, which went into more depth about volatility and retirement portfolios, though I felt that the book covered this topic earlier on to a sufficient degree by building up the intuition using very understandable scenarios.

So that's the review of the book. Overall, it's a very solid piece of writing, and as far as establishing the "why", it does an absolutely superb job. For those who aren't familiar with the concepts in this book, this is definitely a must-read, and ASAP. However, for those familiar with most of the concepts and looking for a detailed "how" procedure, this book does not deliver as much as I would have liked. And I realize that while it's a bad idea to publish secret sauce, I bought this book in the hope of being exposed to a new algorithm presented in the understandable and intuitive language that the rest of the book was written in, and was left wanting.

Still, that by no means diminishes the impact of the rest of the book. For those who are more likely to be its target audience, it's a 5/5. For those who wanted some specifics, it still has its gem on universe construction. Overall, I rate it a 4/5. Thanks for reading.
Category Archives: Trading.

This post will introduce John Ehlers's Autocorrelation Periodogram mechanism: a mechanism designed to dynamically find a lookback period. That is, the most common parameter optimized in backtests is the lookback period.

Before beginning this post, I must give credit where it's due, to one Mr. Fabrizio Maccallini, the head of structured derivatives at Nordea Markets in London. You can find the rest of the repository he did for Dr. John Ehlers's Cycle Analytics for Traders on his github. I am grateful and honored that such intelligent and experienced individuals are helping to bring some of Dr. Ehlers's methods into R.

The point of the Ehlers Autocorrelation Periodogram is to dynamically set a period between a minimum and a maximum period length. While I leave the exact explanation of the mechanic to Dr. Ehlers's book, for all practical intents and purposes, in my opinion, the punchline of this method is to attempt to remove a massive source of overfitting from trading system creation, namely specifying a lookback period. SMA of 50 days? 100 days? 200 days? Well, this algorithm takes that possibility of overfitting out of your hands. Simply specify an upper and lower bound for your lookback, and it does the rest. How well it does it is a topic of discussion for those well-versed in the methodologies of electrical engineering (I'm not), so feel free to leave comments that discuss how well the algorithm does its job, and feel free to blog about it as well.

In any case, here's the original algorithm code, courtesy of Mr. Maccallini (a simplified sketch of the idea appears below). One thing I do notice is that this code uses a loop that says for (i in 1:length(filt)), which is an O(data points) loop, which I view as the plague in R. While I've used Rcpp before, it's been for only the most basic of loops, so this is definitely a place where the algorithm can stand to be improved with Rcpp, due to R's inherently poor looping. Those interested in the exact logic of the algorithm will, once again, find it in John Ehlers's Cycle Analytics For Traders book (see the link earlier in the post).

Of course, the first thing to do is to test how well the algorithm does what it purports to do, which is to dictate the lookback period of an algorithm. Let's run it on some data.
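Mr. Maccallini's original code isn't reproduced in this archive, so as a placeholder, here is my own heavily simplified sketch of the core idea: project rolling autocorrelations onto sines and cosines at each candidate period and pick the period with maximum power. Ehlers's high-pass filtering, smoothing, and automatic gain control stages are all omitted, so treat this as illustrative only.

library(quantmod)

dominantPeriod <- function(x, minP = 10, maxP = 48, avgLen = 48) {
  out <- rep(NA, length(x))
  for (t in seq(maxP + avgLen, length(x))) {
    # lag-l autocorrelations estimated over the last avgLen points
    ac <- sapply(1:maxP, function(l)
      cor(x[(t - avgLen + 1):t], x[(t - avgLen + 1 - l):(t - l)]))
    periods <- minP:maxP
    # power at each candidate period via a Fourier projection of the ACF
    power <- sapply(periods, function(p)
      sum(ac * cos(2 * pi * (1:maxP) / p))^2 + sum(ac * sin(2 * pi * (1:maxP) / p))^2)
    out[t] <- periods[which.max(power)]
  }
  out
}

getSymbols("SPY", from = "1993-01-01")
spyRets <- as.numeric(dailyReturn(Ad(SPY)))  # use returns, not trending prices
estPeriod <- dominantPeriod(spyRets)

# a dynamic-lookback SMA driven by the estimated period (used further below)
dynamicSMA <- function(price, n) {
  sapply(seq_along(price), function(i)
    if (is.na(n[i]) || i < n[i]) NA else mean(price[(i - n[i] + 1):i]))
}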
Now, what does the algorithm-set lookback period look like? Let's zoom in on 2001 through 2003, when the markets went through some upheaval. In this zoomed-in image, we can see that the algorithm's estimates seem fairly jumpy.

Here's some code to feed the algorithm's estimates of n into an indicator, to compute an indicator with a dynamic lookback period as set by Ehlers's autocorrelation periodogram (the dynamicSMA helper in the sketch above gives the flavor). And here is the function applied with an SMA, tuning between 120 and 252 days.

As seen, this algorithm is less consistent than I would like, at least when it comes to using a simple moving average. For now, I'm going to leave this code here and let people experiment with it. I hope that someone will find this indicator helpful to them. Thanks for reading.

NOTES: I am always interested in networking meet-ups in the northeast (Philadelphia / NYC). Furthermore, if you believe your firm will benefit from my skills, please do not hesitate to reach out to me. My linkedin profile can be found here.

Lastly, I am volunteering to curate the R section for books on quantocracy. If you have a book about R that can apply to finance, be sure to let me know about it, so that I can review it and possibly recommend it. Thank you.

This post will be about attempting to use the Depmix package for online state prediction. While the depmix package performs admirably when it comes to describing the states of the past, when used for one-step-ahead prediction, under the assumption that tomorrow's state will be identical to today's, the hidden markov model process found within the package does not perform to expectations.

So, to start off, this post was motivated by Michael Halls-Moore, who recently posted some R code about using the depmixS4 library to use hidden markov models. Generally, I am loath to create posts on topics I don't feel I have an absolutely front-to-back understanding of, but I'm doing this in the hope of learning from others on how to appropriately do online state-space prediction, or regime-switching detection, as it may be called in more financial parlance.

While I've seen the usual theory of hidden markov models (that is, it can rain or it can be sunny, but you can only infer the weather judging by the clothes you see people wearing outside your window when you wake up), and have worked with toy examples in MOOCs (Udacity's self-driving car course deals with them, if I recall correctly, or maybe it was the AI course), at the end of the day, theory is only as good as how well an implementation can work on real data.

For this experiment, I decided to take SPY data since inception, and do a full in-sample backtest on the data. That is, given that the HMM algorithm from depmix sees the whole history of returns, with this god's-eye view of the data, does the algorithm correctly classify the regimes, if the backtest results are any indication? Here's the code to do so, inspired by Dr. Halls-Moore's; a minimal sketch follows.
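This is a condensed sketch in the spirit of Dr. Halls-Moore's code (the original accompanying code isn't reproduced in this archive): fit a three-state gaussian HMM on SPY returns in-sample and extract the posterior state sequence.

library(quantmod)
library(depmixS4)

getSymbols("SPY", from = "1993-01-01")
rets <- na.omit(dailyReturn(Ad(SPY)))

set.seed(123)  # EM fitting is sensitive to starting values
hmm <- depmix(returns ~ 1, family = gaussian(), nstates = 3,
              data = data.frame(returns = as.numeric(rets)))
hmmFit <- fit(hmm, verbose = FALSE)
postProbs <- posterior(hmmFit)

# inspect the state intercepts: above zero = bull state, below zero = bear state
summary(hmmFit)
states <- xts(postProbs$state, order.by = index(rets))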
Essentially, while I did select three states, I noted that anything with an intercept above zero is a bull state, and below zero is a bear state, so essentially, it reduces to two states. With the result: so, not particularly terrible. The algorithm works, kind of, sort of, right?

Well, let's try online prediction now. What I did here was take an expanding window, starting from 500 days since SPY's inception, and keep increasing it by one day at a time. My prediction was, trivially enough, the most recent day's state, using a 1 for a bull state and a -1 for a bear state. I ran this process in parallel on a linux cluster (because Windows's doParallel library seems to not even know that certain packages are loaded, and it's more messy), and the first big issue is that this process took about three hours on seven cores for about 23 years of data. Not exactly encouraging, but computing time isn't expensive these days. (A sketch of the expanding-window loop appears at the end of this post.)

So let's see if this process actually works. First, let's test if the algorithm does what it's actually supposed to do, and use one day of look-ahead bias; that is, the algorithm tells us the state at the end of the day. How correct is it even for that day? With the result: so, allegedly, the algorithm seems to do what it was designed to do, which is to classify a state for a given data set.

Now, the most pertinent question: how well do these predictions do even one day ahead? You'd think that state space predictions would be parsimonious from day to day, given the long history, correct? With the result: that is, without the lookahead bias, the state space prediction algorithm is atrocious. Why is that? Well, here's the plot of the states. In short, the online hmm algorithm in the depmix package seems to change its mind very easily, with obvious negative implications for actual trading strategies.

So, that wraps it up for this post. Essentially, the main message here is this: there's a vast difference between doing descriptive analysis (AKA where have you been, why did things happen) and predictive analysis (that is, if I correctly predict the future, I get a positive payoff). In my opinion, while descriptive statistics have their purpose in terms of explaining why a strategy may have performed how it did, ultimately, we're always looking for better prediction tools. In this case, depmix, at least in this out-of-the-box demonstration, does not seem to be the tool for that.

If anyone has had success with using depmix (or another regime-switching algorithm in R) for prediction, I would love to see work that details the procedure taken, as it's an area I'm looking to expand my toolbox into, but I don't have any particular good leads. Essentially, I'd like to think of this post as me describing my own experiences with the package. Thanks for reading.

NOTE: On Oct 5th, I will be in New York City. On Oct 6th, I will be presenting at The Trading Show on the Programming Wars panel.

NOTE: My current analytics contract is up for review at the end of the year, so I am officially looking for other offers as well. If you have a full-time role which may benefit from the skills you see on my blog, please get in touch with me. My linkedin profile can be found here.
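Here is a rough, serial sketch of the expanding-window procedure described above (the original was run in parallel on a cluster). It assumes the rets series from the previous sketch, and it is slow by construction, since the model is refit every day.

library(depmixS4)

# one-step-ahead 'prediction': refit on an expanding window each day and
# assume tomorrow's state equals today's inferred state
onlinePred <- rep(NA, length(rets))
for (i in 500:(length(rets) - 1)) {
  df <- data.frame(returns = as.numeric(rets[1:i]))
  mod <- depmix(returns ~ 1, family = gaussian(), nstates = 3, data = df)
  fm <- tryCatch(fit(mod, verbose = FALSE), error = function(e) NULL)
  if (is.null(fm)) next
  st <- posterior(fm)$state
  # map states to +1 (bull) / -1 (bear); sketched here via the mean return
  # realized within each state over the window, standing in for the intercepts
  stateMeans <- tapply(df$returns, st, mean)
  onlinePred[i + 1] <- ifelse(stateMeans[as.character(st[i])] > 0, 1, -1)
}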
This post will demonstrate how to take turnover into account when dealing with returns-based data, using PerformanceAnalytics and the Return.portfolio function in R. It will demonstrate this on a basic strategy on the nine sector SPDRs.

So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which can already be found demonstrated on this blog), occasionally, there will be an industry veteran, a PhD statistics student from Stanford, or some other very intelligent individual who will ask a question on a topic that I haven't yet touched on this blog, which will prompt a post to demonstrate another technical aspect found in R. This is one of those times.

So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply, outside of the PortfolioAnalytics package, PerformanceAnalytics, with its Return.portfolio function, is the go-to R package for portfolio management simulations, as it can take a set of weights and a set of returns, and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics's functions.

Again, the strategy is this: take the 9 three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest into it. Normalize across all invested sectors: that is, 1/9th if invested into all 9, 100% into 1 if only 1 is invested into, and 100% into cash, denoted with a zero return vector, if no sectors are invested into. It's a simple, toy strategy, as the strategy isn't the point of the demonstration. Here's the basic setup code: get the SPDRs, put them together, compute their returns, generate the signal, and create the zero vector, since Return.portfolio treats weights summing to less than 1 as a withdrawal, and weights above 1 as the addition of more capital (big FYI here).

Now, here's how to compute turnover. The trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight. These stand for Beginning Of Period Weight and End Of Period Weight.

The way that turnover is computed is simply the difference between how the day's return moves the allocated portfolio from its previous ending point to where that portfolio actually stands at the beginning of the next period. That is, the end-of-period weight is the beginning-of-period weight after taking into account the day's drift return for that asset. The new beginning-of-period weight is the previous end-of-period weight plus any transacting that would have been done. Thus, in order to find the actual transactions (or turnover), one subtracts the previous end-of-period weight from the beginning-of-period weight.

This is what such transactions look like for this strategy. Something we can do with such data is take a one-year rolling turnover, accomplished with the following code (a sketch of the whole procedure appears at the end of this post). It looks like this: this essentially means that one year's worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.

Now, here's the application when you penalize transaction costs at 20 basis points per percentage point traded (that is, it costs 20 cents to transact $100). So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.
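A minimal sketch of the mechanics just described, relying on Return.portfolio's documented verbose output; the toy strategy's signal generation is compressed here, and cash months are handled with all-zero weights rather than the explicit zero-return cash column discussed in the FYI above.

library(quantmod)
library(PerformanceAnalytics)
library(TTR)

symbols <- c("XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY")
getSymbols(symbols, from = "1999-01-01")
prices <- do.call(cbind, lapply(symbols, function(s) Ad(get(s))))
returns <- na.omit(Return.calculate(prices))

# monthly signal: adjusted price above its 200-day SMA, normalized across invested sectors
above200 <- prices > apply(prices, 2, SMA, n = 200)
monthlySig <- above200[endpoints(above200, on = "months"), ]
sigMat <- as.matrix(monthlySig) * 1
sigMat[is.na(sigMat)] <- 0
wts <- xts(sigMat / pmax(rowSums(sigMat), 1), order.by = index(monthlySig))

out <- Return.portfolio(returns, weights = wts, verbose = TRUE)

# turnover: beginning-of-period weights minus the previous end-of-period weights
txns <- na.omit(out$BOP.Weight - lag(out$EOP.Weight))
turnover <- xts(rowSums(abs(txns)), order.by = index(txns))
yearlyTurnover <- runSum(turnover, 252)   # rolling one-year two-way turnover

# 20 basis points per 100% traded
txnCosts <- turnover * 0.0020
netRets <- out$returns - txnCosts
charts.PerformanceSummary(cbind(out$returns, netRets))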
So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one could get as complex as they would like with modeling the transaction costs. Thanks for reading.

NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you'll be able to find me there.

This post will outline an easy-to-make mistake in writing vectorized backtests, namely in using a signal obtained at the end of a period to enter (or exit) a position in that same period. The difference in results one obtains is massive.

Today, I saw two separate posts from Alpha Architect and Mike Harris, both referencing a paper by Valeriy Zakamulin on the fact that some previous trend-following research by Glabadanidis was done with shoddy results, and that Glabadanidis's results were only reproducible through instituting lookahead bias.

The following code shows how to reproduce this lookahead bias (a sketch appears at the end of this post). First, the setup of a basic moving average strategy on the S&P 500 index, from as far back as Yahoo data will provide. And here is how to institute the lookahead bias. These are the results. Of course, this equity curve is of no use, so here's one in log scale. As can be seen, lookahead bias makes a massive difference. Here are the numerical results. Again, absolutely ridiculous.

Note that when using Return.portfolio in PerformanceAnalytics, that package will automatically give you the next period's return, instead of the current one, for your weights. However, for those writing simple backtests that can be quickly done using vectorized operations, an off-by-one error can make all the difference between a backtest in the realm of reasonable, and pure nonsense. However, should one wish to test for said nonsense when faced with impossible-to-replicate results, the mechanics demonstrated above are the way to do it.

Now, onto other news: I'd like to thank Gerald M for staying on top of one of the Logical Invest strategies, namely, their simple global market rotation strategy outlined in an article from an earlier blog post. Up until March 2015 (the date of the blog post), the strategy had performed well. However, after said date, it has been a complete disaster, which, in hindsight, was evident when I passed it through the hypothesis-driven development framework process I wrote about earlier.

So, while there has been a great deal written about not simply throwing away a strategy because of short-term underperformance, and that anomalies such as momentum and value exist because of career risk due to said short-term underperformance, it's never a good thing when a strategy creates historically large losses, particularly after being published in such a humble corner of the quantitative financial world.

In any case, this was a post demonstrating some mechanics, and an update on a strategy I blogged about not too long ago. Thanks for reading.

NOTE: I am always interested in hearing about new opportunities which may benefit from my expertise, and am always happy to network. You can find my LinkedIn profile here.
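A minimal sketch of the off-by-one comparison described above, assuming Yahoo's ^GSPC index data: "lookahead" trades on the same bar the signal is computed on (the bias), while "correct" lags the signal by one bar.

library(quantmod)
library(PerformanceAnalytics)
library(TTR)

getSymbols("^GSPC", from = "1950-01-01")
rets <- na.omit(Return.calculate(Cl(GSPC)))
sig <- Cl(GSPC) > SMA(Cl(GSPC), 200)

lookahead <- sig * rets        # enters on the same bar the signal is computed: biased
correct <- lag(sig) * rets     # enters the bar after the signal: realistic

compare <- na.omit(cbind(lookahead, correct, rets))
colnames(compare) <- c("lookahead", "correct", "buyHold")
charts.PerformanceSummary(compare)
table.AnnualizedReturns(compare)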
Happy new year. This post will be a quick one covering the relationship between the simple moving average and time series momentum. The implication is that one can potentially derive better time series momentum indicators than the classical one applied in so many papers.

Okay, so the main idea for this post is quite simple. I'm sure we're all familiar with classical momentum: that is, the price now compared to the price however long ago (3 months, 10 months, 12 months, etc.), e.g., P(now) - P(10). And I'm sure everyone is familiar with the simple moving average indicator as well, e.g., SMA(10).

Well, as it turns out, these two quantities are actually related. It turns out that if, instead of expressing momentum as the difference of two numbers, it is expressed as the sum of returns, it can be written, for a 10-month momentum, as:

MOM_10 = return of this month + return of last month + return of 2 months ago + ... + return of 9 months ago, for a total of 10 months in our little example.

This can be written as MOM_10 = (P(0) - P(1)) + (P(1) - P(2)) + ... + (P(9) - P(10)), where each difference within parentheses denotes one month's worth of returns.

This can then be rewritten by associative arithmetic as (P(0) + P(1) + ... + P(9)) - (P(1) + P(2) + ... + P(10)).

In other words, momentum, aka the difference between two prices, can be rewritten as the difference between two cumulative sums of prices. And what is a simple moving average? Simply a cumulative sum of prices divided by however many prices were summed over.

Here's some R code to demonstrate (a sketch appears at the end of this post), with the resulting number of times these two signals are equal: in short, every time.

Now, what exactly is the punchline of this little example? Here's the punchline: the simple moving average is fairly simplistic as far as filters go. It works as a pedagogical example, but it has some well-known weaknesses regarding lag, windowing effects, and so on.

Here's a toy example of how one can get a different momentum signal by changing the filter, with the following results. While the difference-of-EMA10 strategy didn't do better than the difference-of-SMA10 (aka standard 10-month momentum), that's not the point. The point is that the momentum signal is derived from a simple moving average filter, and that by using a different filter, one can still use a momentum type of strategy.

Or, put differently, the main general takeaway here is that momentum is the slope of a filter, and one can compute momentum in an infinite number of ways depending on the filter used, and can come up with a myriad of different momentum strategies. Thanks for reading.

NOTE: I am currently contracting in Chicago, and am always open to networking. Contact me at my email or find me on my LinkedIn here.
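The original demonstration code isn't reproduced in this archive; here is a minimal equivalent, using the identity that follows from the algebra above: the 10-month momentum is ten times the first difference of the 10-month SMA, so the two signals' signs must always agree.

library(quantmod)
library(TTR)

getSymbols("SPY", from = "1993-01-01")
monthly <- Ad(SPY)[endpoints(SPY, on = "months")]

mom10 <- momentum(monthly, 10)        # P(0) - P(10)
smaSlope <- diff(SMA(monthly, 10))    # SMA(10) minus its previous value

# MOM10 = 10 * diff(SMA10), so the signs agree on every bar
common <- na.omit(cbind(sign(mom10), sign(smaSlope)))
sum(common[, 1] == common[, 2]) / nrow(common)   # = 1, i.e., every time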
This post will outline a first failed attempt at applying the ensemble filter methodology to try to come up with a weighting process on SPY that should theoretically be a gradual process to shift conviction between a bull market, a bear market, and anywhere in between. This is a follow-up post to this blog post.

So, my thinking went like this: in a bull market, as one transitions from responsiveness to smoothness, responsive filters should be higher than smooth filters, and vice versa, as there's generally a trade-off between the two. In fact, in my particular formulation, the quantity of the square root of the EMA of squared returns punishes any deviation from a flat line altogether (although it was inspired by Basel's measure of volatility, which is the square root of the 18-day EMA of squared returns), while the responsiveness quantity punishes any deviation from the time series of the realized prices. Whether these are the two best measures of smoothness and responsiveness is a topic I'd certainly appreciate feedback on.

In any case, an idea I had on the top of my head was that, in addition to having a way of weighing multiple filters by their responsiveness (deviation from price action) and smoothness (deviation from a flat line), by taking the sums of the sign of the difference between one filter and its neighbor on the responsiveness-to-smoothness spectrum, provided enough ensemble filters (say, 101, so there are 100 differences), one would obtain a way to move from full conviction of a bull market, to a bear market, to anything in between, and have this be a smooth process that doesn't have schizophrenic swings of conviction.

Here's the code to do this on SPY from inception to 2003 (a sketch appears at the end of this post). And here's the very underwhelming result: essentially, while I expected to see changes in conviction of maybe 20% at most, instead, my indicator of the sum of sign differences did exactly as I had hoped it wouldn't, which is to be a very binary sort of mechanic. My intuition was that, between an obvious bull market and an obvious bear market, some differences would be positive, some negative, and that they'd net each other out, and the conviction would be zero. Furthermore, while any individual crossover is binary, all one hundred signs being either positive or negative would be a more gradual process. Apparently, this was not the case. To continue this train of thought later, one thing to try would be an all-pairs sign difference. Certainly, I don't feel like giving up on this idea at this point, and, as usual, feedback would always be appreciated. Thanks for reading.

NOTE: I am currently consulting in an analytics capacity in downtown Chicago. However, I am also looking for collaborators that wish to pursue interesting trading ideas. If you feel my skills may be of help to you, let's talk. You can email me or find me on my LinkedIn here.
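A minimal sketch of the sum-of-sign-differences idea, under my own choice of filter family: a spectrum of EMAs from responsive (short) to smooth (long) standing in for the post's ensemble filters.

library(quantmod)
library(TTR)

getSymbols("SPY", from = "1993-01-01", to = "2003-12-31")
px <- Ad(SPY)

# 101 filters spanning responsive to smooth
lookbacks <- round(seq(5, 250, length.out = 101))
filters <- do.call(cbind, lapply(lookbacks, function(n) EMA(px, n)))

# sign of each filter minus its smoother neighbor: +1 leans bullish, -1 bearish
signs <- sign(filters[, -101] - filters[, -1])
conviction <- xts(rowMeans(signs), order.by = index(px))  # ranges from -1 to +1
plot(conviction, main = "Sum-of-sign-differences conviction (scaled)")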
This review will be about Inovance Tech's TRAIDE system. It is an application geared towards letting retail investors apply proprietary machine learning algorithms to assist them in creating systematic trading strategies. Currently, my one-line review is that while I hope the company's founders mean well, the application is still in an early stage, and so should be checked out by potential users and venture capitalists as something with proof of potential, rather than a finished product ready for mass market. While this acts as a review, it's also my thoughts as to how Inovance Tech can improve its product.

A bit of background: I have spoken several times to some of the company's founders, who sound like individuals at about my age level (so, fellow millennials). Ultimately, the selling point is this: systematic trading is cool, machine learning is cool, therefore applying machine learning to systematic trading is awesome (and a surefire way to make profits, as Renaissance Technologies has shown).

While this may sound a bit snarky, it's also, in some ways, true. Machine learning has become the talk of the town, from IBM's Watson (RenTec itself hired a bunch of speech recognition experts from IBM a couple of decades back), to Stanford's self-driving car (invented by Sebastian Thrun, who now heads Udacity), to the Netflix prize, to god knows what Andrew Ng is doing with deep learning at Baidu. Considering how well machine learning has done at much more complex tasks than "create a half-decent systematic trading algorithm", it shouldn't be too much to ask this powerful field at the intersection of computer science and statistics to help the retail investor glued to watching charts generate a lot more return on his or her investments than through discretionary chart-watching and noise trading. To my understanding from conversations with Inovance Tech's founders, this is explicitly their mission.

However, I am not sure that Inovance's TRAIDE application actually accomplishes this mission in its current state. Here's how it works: users select one asset at a time, and select a date range (data going back to Dec. 31, 2009). Assets are currently limited to highly liquid currency pairs, and can take the following settings: 1 hour, 2 hour, 4 hour, 6 hour, or daily bar time frames. Users then select from a variety of indicators, ranging from technical (moving averages, oscillators, volume calculations, etc., mostly an assortment of 20th century indicators, though the occasional adaptive moving average has managed to sneak in, namely KAMA (see my DSTrading package) and MAMA, aka the Mesa Adaptive Moving Average, from John Ehlers) to more esoteric ones, such as some sentiment indicators. Here's where things start to head south for me, however: while it's easy to add as many indicators as a user would like, there is basically no documentation on any of them (no links to references, etc.), so users will have to bear the onus of actually understanding what each and every one of the indicators they select actually does, and whether or not those indicators are useful. The TRAIDE application makes zero effort thus far to actually get users acquainted with the purpose of these indicators: what their theoretical objective is (measure conviction in a trend, detect a trend, an oscillator-type indicator, etc.).

Furthermore, regarding indicator selection, users also specify one parameter setting for each indicator per strategy. E.g., if I had an EMA crossover, I'd have to create a new strategy for a 20/100 crossover, a 21/100 crossover, and so on, rather than specifying something like this: short EMA 20-60, long EMA 80-200. Quantstrat itself has this functionality, and while I don't recall covering parameter robustness checks/optimization (in other words, testing multiple parameter sets; whether one uses them for optimization or robustness is up to the user, not the functionality in quantstrat) on this blog specifically, this information very much exists in what I deem the official quantstrat manual, found here. In my opinion, the option of covering a range of values is mandatory so as to demonstrate that any given parameter setting is not a random fluke. Outside of quantstrat, I have demonstrated this methodology in my Hypothesis-Driven Development posts, and in coming up with parameter selection for volatility trading.
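As a small illustration of what covering a range of values looks like (my own toy example, not quantstrat's apply-parameter machinery), here is a sweep of SMA lookbacks for a simple filter strategy; a flukey parameter will stand out from its neighbors.

library(quantmod)
library(PerformanceAnalytics)
library(TTR)

getSymbols("^GSPC", from = "1950-01-01")
px <- Cl(GSPC)
rets <- na.omit(Return.calculate(px))

# annualized Sharpe for a given SMA lookback, traded on the next bar
sharpeForLookback <- function(n) {
  sig <- lag(px > SMA(px, n))
  as.numeric(SharpeRatio.annualized(na.omit(sig * rets)))
}

lookbacks <- seq(20, 300, by = 20)
setNames(sapply(lookbacks, sharpeForLookback), lookbacks)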
Where TRAIDE may do something interesting, however, is that after the user specifies his indicators and parameters, its proprietary machine learning algorithms (WARNING: COMPLETELY BLACK BOX) determine what ranges of values of the indicators in question generated the best results within the backtest, and assign them bullishness and bearishness scores. In other words, looking backwards, these were the indicator values that did best over the course of the sample. While there is definite value to exploring the relationships between indicators and future returns, I think that TRAIDE needs to do more in this area, such as reporting p-values, conviction, and so on.

For instance, if you combine enough indicators, your rule becomes a market order that's simply the intersection of all of the ranges of your indicators. TRAIDE may tell a user that the strongest bullish signal occurs when the difference of the moving averages is between 1 and 2, the ADX is between 20 and 25, the ATR is between 0.5 and 1, and so on. Each setting the user selects further narrows down the number of trades the simulation makes. In my opinion, there are more ways to explore the interplay of indicators than simply one giant AND statement, such as an OR statement of some sort; e.g., select all values, and put on a trade when 3 out of 5 indicators fall into the selected bullish range, in order to place more trades (see the sketch below). While it may be wise to filter trades down to very rare instances if trading a massive universe of instruments, such that out of several thousand possible instruments only several are trading at any given time, with TRAIDE a user selects only one asset class (currently, one currency pair) at a time, so I'm hoping to see TRAIDE create more functionality in terms of what constitutes a trading rule.
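As a toy illustration of that voting-style rule, here is a sketch in R. The five "bullish range" conditions are hypothetical placeholders of my own choosing; the point is only the mechanics of counting votes.

require(quantmod)
require(TTR)

getSymbols("SPY", from = "2012-01-01")
price <- Ad(SPY)

# five hypothetical "bullish range" conditions, one per indicator
votes <- cbind(price > SMA(price, 50),
               price > SMA(price, 200),
               RSI(price, 14) > 50,
               ROC(price, 20) > 0,
               CCI(HLC(SPY), 20) > 0)

# an AND rule trades only when all five agree, often far too rarely
andSignal <- rowSums(votes, na.rm = TRUE) == 5

# a 3-out-of-5 vote trades more often while still requiring broad agreement
voteSignal <- rowSums(votes, na.rm = TRUE) >= 3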
After the user selects both a long and a short rule (by simply filtering on indicator ranges that TRAIDE's machine learning algorithms have said are good), TRAIDE turns that into a backtest with a long equity curve, short equity curve, total equity curve, and trade statistics for aggregate, long, and short trades. By contrast, in quantstrat, one only receives aggregate trade statistics: whether long or short, all that matters to quantstrat is whether or not the trade made or lost money. For sophisticated users, it's trivial enough to turn one set of rules on or off, but TRAIDE does more to hold the user's hand in that regard. Lastly, TRAIDE generates MetaTrader4 code for the user to download. And that's the process.

In my opinion, while what Inovance Tech has set out to do with TRAIDE is interesting, I wouldn't recommend it in its current state. For sophisticated individuals who know how to go through a proper research process, TRAIDE is too stringent in terms of parameter settings (one at a time), pre-coded indicators (its target audience probably can't program too well), and asset classes (again, one at a time). However, for retail investors, my issue with TRAIDE is this:

There is a whole assortment of undocumented indicators, which then feed into black-box machine learning algorithms. The result is that the user has very little understanding of what the underlying algorithms actually do, and why the logic he or she is presented with is the output. While TRAIDE makes it trivially easy to generate any one given trading system, as multiple individuals have stated in slightly different ways before, writing a strategy is the easy part; doing the work to understand whether that strategy actually has an edge is much harder. Namely: checking its robustness, its predictive power, its sensitivity to various regimes, and so on. Given TRAIDE's rather short data history (2010 onwards), coupled with the opaqueness the user operates under, my analogy would be this: it's like giving an inexperienced driver the keys to a sports car in a thick fog on a winding road. Nobody disputes that a sports car is awesome. However, the true burden of the work lies in making sure that the user doesn't wind up smashing into a tree.

Overall, I like the TRAIDE application's mission, and I think it may have potential as something for retail investors who don't intend to learn the ins and outs of coding a trading system in R (despite me demonstrating many times over how to put such systems together). I just think that more work needs to be put into making sure that the results a user sees are indicative of an edge, rather than opening the possibility of highly flexible machine learning algorithms chasing ghosts in one of the noisiest and most dynamic data sets one can possibly find.

My recommendations are these:

1) Multiple asset classes.
2) Allow parameter ranges, and cap the number of trials at any given point (e.g. 4 indicators with ten settings each = 10,000 possible trading systems = blow up the servers). To narrow down the number of trial runs, use techniques from experimental design to arrive at decent combinations. (I wish I remembered my response surface methodology techniques from my master's degree about now.)
3) Allow modifications of order sizing (e.g. volatility targeting, stop losses), such as I wrote about in my hypothesis-driven development posts.
4) Provide some sort of documentation for the indicators, even if it's as simple as a link to Investopedia (preferably a lot more).
5) Far more output is necessary, especially for users who don't program. Namely, output to distinguish whether or not there is a legitimate edge, or whether there are too few observations to reject the null hypothesis of random noise.
6) Far longer data histories. 2010 onwards just seems too short a time frame to be sure of a strategy's efficacy, at least on daily data (this may not be true for hourly).
7) Factor in transaction costs. Trading on an hourly time frame will mean far less P&L per trade than on a daily resolution. If MT4 charges a fixed ticket price, users need to know how this factors into their strategy.
8) Lastly, dogfooding. When I last spoke with Inovance Tech's founders, they claimed they were using their own algorithms to create a forex strategy, which was doing well in live trading. By the time more of these suggestions are implemented, it'd be interesting to see whether they have a track record as a fund, in addition to as a software provider.

If all of these things are accounted for and automated, the product will hopefully accomplish its mission of bringing systematic trading and machine learning to more people. I think TRAIDE has potential, and I'm hoping that its staff will realize that potential.

Thanks for reading.
NOTE: I am currently contracting in downtown Chicago, and am always interested in networking with professionals in the systematic trading and systematic asset management/allocation spaces. Find my LinkedIn here.

EDIT: Today in my email (Dec. 3, 2015), I received a notice that Inovance was making TRAIDE completely free. Perhaps they want a bunch more feedback on it.

This post will demonstrate a method to create an ensemble filter based on a trade-off between smoothness and responsiveness, two properties looked for in a filter. An ideal filter would be responsive to price action, so as to not hold incorrect positions, while also being smooth, so as to not incur false signals and unnecessary transaction costs.

So, ever since my volatility trading strategy, using three very naive filters (all SMAs), completely missed a 27% month in XIV, I've decided to try and improve ways to create better indicators in trend following. Now, under the realization that there can potentially be tons of complex filters in existence, I decided instead to focus on a way to create ensemble filters, by using an analogy from statistics and machine learning.

In static data analysis, for a regression or classification task, there is a trade-off between bias and variance. In a nutshell, variance is bad because of the possibility of overfitting on a few irregular observations, and bias is bad because of the possibility of underfitting legitimate data. Similarly, with filtering time series, there are analogous concerns, except bias is called lag, and variance can be thought of as a whipsawing indicator. Essentially, an ideal indicator would move quickly with the data, while at the same time not possessing a myriad of small bumps-and-reverses along the way, which may send false signals to a trading strategy.

So, here's how my simple algorithm works. The inputs to the function are the following:

A) The time series of the data you're trying to filter.
B) A collection of candidate filters.
C) A period over which to measure smoothness and responsiveness, defined as the square root of the n-day EMA (2/(n+1) convention) of the following:
a) Responsiveness: the squared quantity of price/filter - 1.
b) Smoothness: the squared quantity of filter(t)/filter(t-1) - 1 (aka R's Return.calculate function).
D) A conviction factor, to which power the errors will be raised. This should probably be between 0.5 and 3.
E) A vector that defines the emphasis on smoothness (vs. the emphasis on responsiveness), which should range from 0 to 1.

Here's the code.
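The original listing did not survive here, so what follows is a hedged sketch reconstructed from the description above. The utilities xtsApply and sumIsNa match what is described below; the inverse-error weighting step is my own plausible reading, not necessarily the original implementation.

require(quantmod)
require(TTR)

getSymbols("SPY", from = "1990-01-01")
price <- Ad(SPY)

# column-wise apply that restores the xts index which apply() discards
xtsApply <- function(x, FUN, ...) {
  xts(apply(x, 2, FUN, ...), order.by = index(x))
}

# count NAs in a row (handy for trimming the burn-in of the longest filters)
sumIsNa <- function(x) sum(is.na(x))

# candidate filters: simple moving averages of lengths 2 through 250
filters <- do.call(cbind, lapply(2:250, function(n) SMA(price, n)))

ensembleFilter <- function(price, filters, n = 20, conviction = 2, lambda = 0.5) {
  p <- as.numeric(price)
  # responsiveness error: deviation of each filter from price
  respErr <- sqrt(xtsApply(filters, function(f) EMA((p / f - 1)^2, n = n)))
  # smoothness error: deviation of each filter from its own prior value
  smoothErr <- sqrt(xtsApply(filters, function(f)
    EMA((f / c(NA, f[-length(f)]) - 1)^2, n = n)))
  # blend the errors per the smoothness emphasis lambda, raise to the
  # conviction power, and weight each candidate inversely to its error
  totalErr <- (lambda * smoothErr + (1 - lambda) * respErr)^conviction
  w <- (1 / totalErr) / rowSums(1 / totalErr, na.rm = TRUE)
  xts(rowSums(filters * w, na.rm = TRUE), order.by = index(filters))
}

# example: a 50/50 smoothness-responsiveness blend
# blend <- ensembleFilter(price, filters, lambda = 0.5)

With lambda = 0 the weights chase responsiveness (approaching the price itself), while lambda = 1 favors the smoothest candidates; sweeping lambda from 0 to 1 produces the family of curves described next.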
This gets SPY data, and creates two utility functions: xtsApply, which is simply a column-based apply that replaces the original index that a column-wise apply discards, and sumIsNa, which I use later for counting the number of NAs in a given row. It also creates my candidate filters, which, to keep things simple, are just SMAs of lengths 2 through 250.

Here's the actual code of the function, with comments in the code itself to better explain the process from a technical level (for those still unfamiliar with R, look for the hashtags). The vast majority of the computational time takes place in the two xtsApply calls; on 249 different simple moving averages, the process takes about 30 seconds.

Here's the output, using a conviction factor of 2. And here is an example, looking at SPY from 2007 through 2011. In this case, I chose to go from blue to green, orange, brown, maroon, purple, and finally red for smoothness emphases of 0, 0.05, 0.25, 0.5, 0.75, 0.95, and 1, respectively.

Notice that the blue line is very wiggly, while the red line sometimes barely moves, such as during the 2011 drop-off. One thing I noticed in the course of putting this process together is something that had eluded me earlier: namely, that naive trend-following strategies which are either fully long or fully short based on a crossover signal can lose money quickly in sideways markets.

However, theoretically, by finely varying the jumps between 0 and 100% emphasis on smoothness, whether in steps of 1% or finer, one can have a sort of continuous conviction, by simply adding up the signs of the differences between the various ensemble filters. In an uptrend, the difference as one moves from the most responsive to the most smooth filter should be consistently positive, and vice versa.

In the interest of brevity, this post doesn't even have a trading strategy attached to it. However, an implied trading strategy would be to go long or short SPY depending on the sum of signs of the differences in filters as you move from responsiveness to smoothness. Of course, as the candidate filters are all SMAs, it probably wouldn't be particularly spectacular. However, for those out there who use more complex filters, this may be a way to create ensembles out of various candidate filters, and create even better filters. Furthermore, I hope that, given enough candidate filters and an objective way of selecting them, it would be possible to reduce the chances of creating an overfit trading system. However, anything with parameters can potentially be overfit, so that may be wishful thinking.

All in all, this is still a new idea for me. For instance, the filter used to compute the error terms can probably be improved. The inspiration for an EMA 20 essentially came from how Basel computes volatility (if I recall correctly, it uses the square root of an 18-day EMA of squared returns), and the very fact that I use an EMA can itself be improved upon (why an EMA instead of some other, more complex filter?). In fact, I'm always open to suggestions from readers on how I can improve this concept and others.

Thanks for reading.

NOTE: I am currently contracting in Chicago in an analytics capacity. If anyone would like to meet up, let me know. You can email me at or contact me through my LinkedIn here.

This post will deal with a quick, finger-in-the-air way of seeing how well a strategy scales, namely, how sensitive it is to latency between signal and execution, using a simple volatility trading strategy as an example. The signal will be the VIX/VXV ratio trading VXX and XIV, an idea I got from Volatility Made Simple's amazing blog, particularly this post. The three signals compared will be the "magical thinking" signal (observe the close, buy the close; named after the ruleOrderProc setting in quantstrat), buy on next-day open, and buy on next-day close.

Let's get started.
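Here is a rough sketch of the comparison, not the original code. It assumes vxx and xiv daily OHLC xts objects and a vixRatio series (VIX close divided by VXV close) are already loaded; XIV's full history comes from the dropbox CSVs mentioned below.

require(quantmod)
require(PerformanceAnalytics)

sig <- vixRatio < 1              # long XIV when VIX/VXV < 1, else long VXX

xivRet <- ROC(Cl(xiv))           # close-to-close returns
vxxRet <- ROC(Cl(vxx))

# "magical thinking": observe today's close, transact that same close,
# so tomorrow's return accrues to today's signal
magical <- ifelse(lag(sig), xivRet, vxxRet)

# transact at the next day's close instead: everything slips one more day
nextClose <- ifelse(lag(sig, 2), xivRet, vxxRet)
# (a next-day-open variant would additionally need Op(xiv)/Op(vxx) returns)

compare <- na.omit(cbind(magical, nextClose))
colnames(compare) <- c("magicalThinking", "nextDayClose")
charts.PerformanceSummary(compare)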
So here's the run-through: in addition to the magical-thinking strategy (observe the close, buy that same close), I tested three other variants: a variant which transacts at the next open, a variant which transacts at the next close, and the average of those two. Effectively, I feel these three could give a sense of a strategy's performance under more realistic conditions; that is, how well does the strategy perform if transacted throughout the day, assuming you're managing a sum of money too large to just plow into the market in the closing minutes (and if you hope to get rich off of trading, you will have a larger sum of money than the amount you can apply magical thinking to). Ideally, I'd use VWAP pricing, but as that's not available for free anywhere I know of, readers couldn't replicate the result even if I had such data.

In any case, here are the results. (Log scale, for Mr. Tony Cooper and others.)

My reaction: the "execute on next day's close" performance being vastly lower than the other configurations, with that deterioration occurring in the most recent years, essentially means that the fills will have to come pretty quickly at the beginning of the day. While the strategy seems somewhat scalable through the lens of this finger-in-the-air technique, in my opinion, if the first full day of possible execution after signal reception will tank a strategy from a 1.44 Calmar to a 0.92, that's a massive drop-off, after holding everything else constant. In my opinion, this is quite a valid question to ask anyone who simply sells signals, as opposed to managing assets: namely, how sensitive are the signals to execution on the next day? After all, unless those signals come at 3:55 PM, one is most likely going to be getting filled the next day.

Now, while this strategy is a bit of a tomato can in terms of how good volatility trading strategies can get (they can get a lot better, in my opinion), I think it made for a simple little demonstration of this technique. Again, a huge thank-you to Mr. Helmuth Vollmeier for so kindly keeping up his dropbox all this time for the volatility data.

Thanks for reading.

NOTE: I am currently contracting in a data science capacity in Chicago. You can email me at or find me on my LinkedIn here. I'm always open to beers after work if you're in the Chicago area.

NOTE 2: Today, on October 21, 2015, if you're in Chicago, there's a Chicago R Users Group conference at Jaks Tap at 6:00 PM. Free pizza, networking, and R, hosted by Paul Teetor, who's a finance guy. Hope to see you there.

This post deals with an impossible-to-implement statistical arbitrage strategy using VXX and XIV. The strategy is simple: if the average daily return of VXX and XIV was positive, short both of them at the close. This strategy makes two assumptions of varying dubiousness: that one can observe the close and act on the close, and that one can short VXX and XIV.

So, recently, I decided to play around with everyone's two favorite instruments on this blog, VXX and XIV, with the idea that, hey, these two instruments are diametrically opposed, so shouldn't there be a stat-arb trade here?

So, in order to do a lick-finger-in-the-air visualization, I implemented Mike Harris's momersion indicator and ran the spread through it. The upshot: this spread is certainly mean-reverting at just about all times.

And here is the code for the results from 2011 onward, from when XIV and VXX actually started trading.
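A hedged sketch of that strategy follows; it is my own rendering of the rule described above, again assuming vxx and xiv daily close series are already loaded.

require(quantmod)
require(PerformanceAnalytics)

vxxRet <- ROC(Cl(vxx))
xivRet <- ROC(Cl(xiv))
avgRet <- (vxxRet + xivRet) / 2

# rule: if today's average return was positive, short both at today's close;
# the position then earns the negative of tomorrow's average return
sig <- avgRet > 0
stratRet <- lag(sig) * -avgRet

charts.PerformanceSummary(na.omit(stratRet["2011::"]))

As noted next, the catch is that actually shorting both instruments at the observed close is the impossible part.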
Here are the equity curves, with the following statistics:

In other words, the short side is absolutely amazing as a trade, except for the one small fact of it being impossible to actually execute, or at least as far as I'm aware. Anyhow, this was simply a for-fun post, but hopefully it served some purpose.

Thanks for reading.

NOTE: I am currently contracting and am looking to network in the Chicago area. You can find my LinkedIn here.

Financial Mathematics and Modeling II (FINC 621) is a graduate-level class currently offered at Loyola University in Chicago during the winter quarter. FINC 621 explores topics in quantitative finance, mathematics, and programming. The class is practical in nature and is comprised of both a lecture and a lab component. The labs utilize the R programming language, and students are required to submit their individual assignments at the end of each class. The end goal of FINC 621 is to provide students with practical tools that they can use to create, model, and analyze simple trading strategies.

Some useful R links.

About the Instructor.

Harry G. is a senior quantitative trader for an HFT trading firm in Chicago. He holds a master's degree in Electrical Engineering and a master's degree in Financial Mathematics from the University of Chicago. In his spare time, Harry teaches a graduate-level course in Quantitative Finance at Loyola University in Chicago. He is also the author of Quantitative Trading with R.

The folks at RStudio have done some amazing work with the shiny package. From the shiny homepage: "Shiny makes it super simple for R users like you to turn analyses into interactive web applications that anyone can use." Developing web applications has always appealed to me, but hosting, learning JavaScript, HTML, etc. made me put this pretty low on my priority list. With shiny, one can write web applications in R.

This example uses the managers dataset with calls to and from the PerformanceAnalytics package to display a plot and a table in the shiny application. Below is a screenshot of the application. You need to have the shiny and PerformanceAnalytics packages installed to run the application. Once those are installed, open your R prompt and run. There is a great shiny tutorial from RStudio, as well as examples from SystematicInvestor, for those interested in learning more.

The past few posts on momentum with R focused on a relatively simple way to backtest momentum strategies. In part 4, I use the quantstrat framework to backtest a momentum strategy. Using quantstrat opens the door to several features and options, as well as an order book to check the trades at the completion of the backtest.

I introduce a few new functions that are used to prep the data and compute the ranks. I won't go through them in detail; these functions are available in my github repo in the rank-functions folder.

This first chunk of code just loads the necessary libraries and data, and applies the ave3ROC function to rank the assets based on averaging the 2, 4, and 6 month returns. Note that you will need to load the functions in Rank.R and monthly-fun.R.

The next chunk of code is a critical step in preparing the data to be used in quantstrat. With the ranks computed, the next step is to bind the ranks to the actual market data to be used with quantstrat. It is also important to change the column names, e.g. to a rank label, because that column will be used as the trade signal column when quantstrat is used (a generic sketch of this prep follows below).

Now the backtest can be run. The function qstratRank is just a convenience function that hides the quantstrat implementation for my Rank strategy.
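To make the prep step concrete, here is a generic sketch; it is not the actual Rank.R code, and the four-ETF universe, ranking convention, and column names are illustrative stand-ins for the repo's functions.

require(quantmod)

symbols <- c("XLY", "XLP", "XLE", "XLF")
getSymbols(symbols, from = "2000-01-01")

monthly <- do.call(merge, lapply(symbols, function(s) {
  Ad(to.monthly(get(s), indexAt = "lastof"))
}))
colnames(monthly) <- symbols

# average the 2, 4, and 6 month returns, then rank across assets each month
avgROC <- na.omit((ROC(monthly, 2) + ROC(monthly, 4) + ROC(monthly, 6)) / 3)
ranks <- xts(t(apply(avgROC, 1, rank)), order.by = index(avgROC))  # 4 = strongest

# bind each asset's rank to its market data under a column name the
# quantstrat signal logic (e.g. sigThreshold) can reference
XLY.rank <- merge(monthly$XLY, ranks$XLY)
colnames(XLY.rank) <- c("XLY.Close", "XLY.Rank")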
For this first backtest, I am trading the top 2 assets with a position size of 1000 units. Changing the argument gives the flexibility of scaling into a trade. In this example, say asset ABC is ranked 1 in the first month: I buy 500 units. In month 2, asset ABC is still ranked 1: I buy another 500 units.

In the previous post, I demonstrated simple backtests for trading a number of assets ranked based on their 3, 6, 9, or 12 month simple returns (i.e. the lookback periods). While it was not an exhaustive backtest, the results showed that when trading the top 8 ranked assets, rankings based on 3, 6, 9, and 12 month returns resulted in similar performance.

If the results were similar for the different lookback periods, which lookback period should I choose for my strategy? My answer is to include multiple lookback periods in the ranking method. This can be accomplished by taking the average of the 6, 9, and 12 month returns, or any other n-month returns. This gives us the benefit of diversifying across multiple lookback periods. If I believe that the lookback period of 9-month returns is better than that of the 6- and 12-month, I can use a weighted average to give the 9-month return a higher weight, so that it has more influence on determining the rank. This can be implemented easily with what I am calling the WeightAve3ROC function, shown below.
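Since the function listing did not survive here, the following is a plausible sketch of what a WeightAve3ROC-style function does, based on the description above; the signature is my guess, not the original.

require(quantmod)

# weighted average of three n-month rates of change
# x       : xts of monthly prices, one column per asset
# n       : vector of three lookback periods, e.g. c(6, 9, 12)
# weights : vector of three weights summing to 1, e.g. c(1/6, 2/3, 1/6)
WeightAve3ROC <- function(x, n = c(6, 9, 12), weights = c(1/6, 2/3, 1/6)) {
  stopifnot(length(n) == 3, length(weights) == 3,
            isTRUE(all.equal(sum(weights), 1)))
  weights[1] * ROC(x, n[1]) + weights[2] * ROC(x, n[2]) + weights[3] * ROC(x, n[3])
}

# equal weights reduce to the simple average of the three lookbacks
# ave <- WeightAve3ROC(monthly, weights = rep(1/3, 3))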
The function is pretty self-explanatory, but feel free to ask if you have any questions.

Now to the test results. The graph below shows the results from using 6, 9, and 12 month returns, as well as an average of the 6, 9, and 12 month returns and a weighted average of the 6, 9, and 12 month returns.

Case 1: simple momentum test based on 6 month ROC to rank.
Case 2: simple momentum test based on 9 month ROC to rank.
Case 3: simple momentum test based on 12 month ROC to rank.
Case 4: simple momentum test based on the average of 6, 9, and 12 month ROC to rank.
Case 5: simple momentum test based on the weighted average of 6, 9, and 12 month ROC to rank. Weights are 1/6, 2/3, and 1/6 for the 6, 9, and 12 month returns.

Here is a table of the returns and maximum drawdowns for the test. This test demonstrates how it may be possible to achieve better risk-adjusted returns (higher CAGR and lower drawdowns, in this case) by considering multiple lookback periods in the ranking method.

Full R code is below. I have included all the functions in the R script to make it easy for you to reproduce the tests and try things out, but I would recommend putting the functions in a separate file and using source() to load them, to keep the code cleaner.

Many of the sites I linked to in the previous post have articles or papers on momentum investing that investigate the typical ranking factors: 3, 6, 9, and 12 month returns. Most (not all) of the articles seek to find which look-back period is best for ranking the assets. Say the outcome of an article is that the 6-month look-back has the highest returns. Trading a strategy that just uses a 6-month look-back period to rank the assets leaves me vulnerable to over-fitting based on the backtest results. The backtest tells us nothing more than which strategy performed best in the past; it tells us nothing about the future (duh).

Whenever I review the results from backtests, I always ask myself a lot of "what if" questions. Here are 3 "what if" questions that I would ask for this backtest:

1) What if the strategy based on a 6-month look-back underperforms, and the 9-month or 3-month starts to outperform?
2) What if the strategies based on 3, 6, and 9 month look-back periods have about the same return and risk profile? Which strategy should I trade?
3) What if the assets with high volatility are dominating the rankings and hence driving the returns?

The backtests shown are simple backtests meant to demonstrate the variability in returns based on look-back periods and the number of assets traded. The graphs below show the performance of a momentum strategy using 3, 6, 9, and 12 month returns, trading the top 1, 4, and 8 ranked assets. You will notice that there is significant volatility and variability in returns when trading only 1 asset. The variability between look-back periods is reduced when trading more assets, but there is still no one clear best look-back period. There are periods of underperformance and overperformance for all look-back periods in the test.

Here is the R code used for the backtests and the plots. Leave a comment if you have any questions about the code below.

Time really flies; it is hard to believe that it has been over a month since my last post. Work and life in general have consumed much of my time lately and left little time for research and blog posts. Anyway, on to the post.

This post will be the first in a series covering a momentum strategy using R. One of my favorite strategies is a momentum or relative strength strategy. Here are just a few of the reasons why I like momentum:

Simple to implement.
Long-only or long/short portfolios.
Many ways to define the strength or momentum measure.
It just works.

Also, a momentum strategy lends itself well to the potential for diversification. The universe of instruments can be infinite, but the instruments traded are finite. Think about it this way: Investor A looks at 10 instruments and invests 1000 in the top 5 instruments ranked by momentum. Investor B looks at 100 instruments and invests 1000 in the top 5 instruments ranked by momentum. Investor A is limiting his potential for diversification by only having a universe of 10 instruments. Investor B has a much larger universe of instruments and can in theory be more diversified. Theoretically speaking, you can trade an infinite number of instruments with a finite amount of trading capital using a momentum or relative strength strategy.

Check out these links for further reading.

In this first post of the series on momentum, I will go over some of the basic setup and functions we will be using. The first step is to get data from Yahoo. Note that the for loop converts the data to monthly and subsets it so that the only column we keep is the adjusted close column. We now have four objects (XLY, XLP, XLE, XLF) that hold the adjusted close prices. The next step is to merge these four objects into a single object holding the adjusted close prices; we can do this with a simple one-liner in R. For the factor that will be ranked, I will use the 3-period rate of change (ROC).
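A sketch of that setup step, reconstructed from the description (the date range is my own choice):

require(quantmod)

symbols <- c("XLY", "XLP", "XLE", "XLF")
getSymbols(symbols, from = "2000-01-01")

# the for loop converts each object to monthly bars and keeps only the
# adjusted close column
for (sym in symbols) {
  x <- to.monthly(get(sym), indexAt = "lastof")
  assign(sym, Ad(x))
}

# merge the four adjusted close series into a single object -- the one-liner
prices <- merge(XLY, XLP, XLE, XLF)
colnames(prices) <- symbols

# the factor to be ranked: 3-period (here, 3-month) rate of change
roc <- ROC(prices, n = 3)

From here, the series proceeds to ranking the assets on this ROC factor.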
