Moving Average Models for Volatility and Correlation, and Covariance Matrices

JWPR0-Fabozzi cc cc November, 00.

CHAPTER CC
Moving Average Models for Volatility and Correlation, and Covariance Matrices
CAROL ALEXANDER, PhD
Chair of Risk Management and Director of Research, ICMA Centre, Business School, University of Reading

Contents: Fundamental Properties of Covariance and Correlation Matrices; Equally Weighted Moving Averages; Statistical Methodology; Confidence Intervals for Variance and Volatility; Standard Errors for Equally Weighted Moving Average Estimators; Equally Weighted Moving Average Covariance Matrices; Case Study: Measuring the Volatility and Correlation of U.S. Treasuries; How Long a Historical Data Period Should Be Used; Pitfalls of the Equally Weighted Moving Average Method; Exponentially Weighted Moving Averages; Statistical Methodology; Interpretation of Lambda; Properties of EWMA Forecasts; Standard Errors for EWMA Forecasts; The RiskMetrics TM Methodology; Summary; References

Abstract: The volatilities and correlations of the returns on a set of assets, risk factors, or interest rates are summarized in a covariance matrix. This matrix lies at the heart of risk and return analysis.
It contains all the information required to estimate the volatility of a portfolio, to simulate correlated values of its risk factors, to diversify investments, and to obtain efficient portfolios that have the optimal trade-off between risk and return. Both risk managers and asset managers require covariance matrices that may include many assets or risk factors. For instance, in the global risk management system of a large international bank, all the major yield curves, equity indices, foreign exchange rates, and commodity prices will be encompassed in one very large dimensional covariance matrix.

Keywords: volatility, correlation, covariance, matrix, equally weighted moving average, exponentially weighted moving average (EWMA), smoothing constant, RiskMetrics, standard error of volatility estimate

Variances and covariances are parameters of the joint distribution of asset (or risk factor) returns. It is important to understand that they are unobservable. They can only be estimated or forecast in the context of a model. Continuous-time models, which are used for option pricing, are often based on stochastic processes for the variance and covariance. Discrete-time models, which are used for measuring portfolio risk, are based on time series models for variance and covariance. In each case, we can only ever estimate or forecast variance and covariance in the context of the assumed model. It must be stressed that there is no absolute "true" variance or covariance. What is "true" depends only on the statistical model. Even if we knew for certain that our model was a correct representation of the data generation process, we could never measure the true variance and covariance parameters exactly, because pure variance and covariance are not traded in the market.
An exception to this is futures on volatility indices such as the Chicago Board Options Exchange Volatility Index (VIX), so some risk-neutral volatilities are observable. However, this chapter deals with covariance matrices in the physical measure. Estimating a variance according to the formula given by a model, using historical data, gives the realized variance of the process assumed in the model. But this realized variance is still only an estimate. A sample estimate is always subject to sampling error, meaning that its value depends on the sample data used. In short, different statistical models can give different variance and covariance estimates for two reasons:

1. The "true" variance (or covariance) differs between models. As a result, there is a considerable amount of model risk inherent in the construction of a covariance or correlation matrix. That is, very different results can be obtained using two different statistical models, even when they are based on exactly the same data.

2. The estimates of the "true" variance (or covariance) are subject to sampling error. That is, even when we use the same model to estimate a variance, our estimates will differ depending on the data used. Both changing the sample period and changing the frequency of the observations will affect the covariance matrix estimate.

This chapter deals with the discrete-time moving average models for variance and covariance, focusing on the practical implementation of this approach and explaining its advantages and limitations. Other statistical tools are described in Alexander (2008).
FUNDAMENTAL PROPERTIES OF COVARIANCE AND CORRELATION MATRICES

A covariance matrix is the symmetric matrix of variances and covariances of the returns on a set of assets, or on a set of risk factors, given by:

$$V = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1m} \\ \sigma_{12} & \sigma_2^2 & \cdots & \sigma_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{1m} & \sigma_{2m} & \cdots & \sigma_m^2 \end{pmatrix} \quad (CC.)$$

Since $\sigma_{ij} = \rho_{ij}\sigma_i\sigma_j$, the covariance matrix may also be expressed as

$$V = DCD \quad (CC.)$$

where $D$ is the diagonal matrix with elements equal to the standard deviations of the returns and $C$ is the correlation matrix of the returns. That is,

$$D = \begin{pmatrix} \sigma_1 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_m \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & \rho_{12} & \cdots & \rho_{1m} \\ \rho_{12} & 1 & \cdots & \rho_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{1m} & \rho_{2m} & \cdots & 1 \end{pmatrix}$$

Hence a covariance matrix is simply a mathematically convenient way of expressing the volatilities of the assets together with their correlations. To illustrate how to estimate an annual covariance matrix and a 10-day covariance matrix, suppose we are given the annual volatilities of three assets and their three pairwise correlations. Form $D$ and $C$ from these figures; the annual covariance matrix is then $DCD$. To find the 10-day covariance matrix in this simple case, one is forced to assume that the returns are independent and identically distributed, in order to use the square-root-of-time rule: that is, that the h-day covariance matrix is h times the 1-day covariance matrix. In other words, the 10-day covariance matrix is obtained from the annual matrix by dividing each element by 25, assuming there are 250 trading days per year. Alternatively, we may obtain the 10-day matrix by using 10-day volatilities in $D$. Note that under the assumption of independent and identically distributed returns, $C$ should be unaffected by the holding period.
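The V = DCD construction and the square-root-of-time scaling can be sketched in a few lines of Python. The volatilities and correlations below are illustrative assumptions (the chapter's own figures are not reproduced here), and 250 trading days per year are assumed:

```python
import numpy as np

# Illustrative annual volatilities and correlations for three assets
vols = np.array([0.20, 0.10, 0.15])            # sigma_1, sigma_2, sigma_3
C = np.array([[1.0, 0.8, 0.5],
              [0.8, 1.0, 0.3],
              [0.5, 0.3, 1.0]])                # correlation matrix

D = np.diag(vols)                              # diagonal matrix of volatilities
V_annual = D @ C @ D                           # covariance matrix V = DCD

# Square-root-of-time rule: assuming i.i.d. returns and 250 trading days
# per year, the 10-day matrix is the annual matrix divided by 25
V_10day = V_annual / 25

# C is unaffected by the holding period: recover it from the 10-day matrix
d = np.sqrt(np.diag(V_10day))
C_recovered = V_10day / np.outer(d, d)
```

Scaling every element of V by a constant leaves the implied correlation matrix unchanged, exactly as the i.i.d. assumption requires.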
That is, $V_{10} = D_{10} C D_{10}$, since each volatility is simply divided by the square root of 25, and we obtain the same result as above. Note that $V$ is positive semidefinite if and only if $C$ is positive semidefinite, because $D$ is always positive definite. Hence the positive semidefiniteness of $V$ depends only on the way we construct the correlation matrix. It is a challenge to produce meaningful, positive semidefinite correlation matrices that are large enough for managers to be able to net risks across all positions in a firm. Simplifying assumptions are necessary. For instance, RiskMetrics uses a very simple methodology based on moving averages to estimate very large positive definite matrices covering hundreds of risk factors for the global financial markets. (This is discussed further below.)

EQUALLY WEIGHTED MOVING AVERAGES

This section describes how volatility and correlation are estimated and forecast by applying equal weights to certain historical time series data. We outline a number of pitfalls and limitations of this approach and, as a result, recommend that these models be used only as an indication of the possible range for long-term volatility and correlation. As we shall see, the model has dubious validity for short-term volatility and correlation forecasting. In what follows, for simplicity, we assume that the mean return is zero and that returns are measured at the daily frequency, unless otherwise stated. A zero mean return is a standard assumption for risk assessments based on daily data series, but if returns are measured over longer intervals this assumption may not be very realistic.
The equally weighted estimate of the variance is then the average of the squared returns, and the corresponding volatility estimate is the square root of this, expressed as an annualized percentage. The equally weighted estimate of the covariance of two returns is the average of the cross products of the returns, and the equally weighted correlation estimate is the ratio of the covariance to the square root of the product of the two variances.

The equal weighting of historical data was the first statistical method to be widely accepted for forecasting the volatility and correlation of financial asset returns. For many years it was the market standard to forecast average volatility over a future horizon by taking an equally weighted average of squared returns over a look-back period of the same length. This method was called the "historical" volatility forecast. Nowadays many different statistical forecasting techniques can be applied to historical time series data, so it is confusing to call this equally weighted method the "historical" method. However, this rather confusing terminology remains standard.

Perceived changes in volatility and correlation have important consequences for all types of risk management decisions, whether they concern capitalization, resource allocation, or hedging strategies. Indeed, it is these parameters of the return distributions that are the fundamental building blocks of market risk assessment models. It is therefore essential to understand what type of variability in returns the model is measuring. The model assumes that an independent and identically distributed process generates the returns. That is, volatility and correlation are both constant, and the square-root-of-time rule applies. This assumption has important consequences, and we must explain it very carefully.

Statistical Methodology

The methodology for constructing a covariance matrix based on equally weighted averages can be described in very simple terms.
Consider a set of time series of returns $r_{i,t}$, $i = 1, \ldots, m$, $t = 1, \ldots, T$. Here the subscript $i$ denotes the asset or risk factor, and $t$ denotes the time at which each return is measured. We shall assume that each return has zero mean. Then an unbiased estimate of the unconditional variance of the $i$th return variable at time $t$, based on the last $T$ daily returns, is:

$$\hat\sigma_{i,t}^2 = \frac{1}{T} \sum_{l=1}^{T} r_{i,t-l}^2 \quad (CC.)$$

An unbiased estimate means that the expected value of the estimator equals the true value. Note that (CC.) gives an unbiased estimate of the variance, but this is not the same as the square of an unbiased estimate of the standard deviation. That is, $E(\hat\sigma^2) = \sigma^2$ but $E(\hat\sigma) \neq \sigma$. So, strictly, the "hat" should be written over the whole of $\sigma^2$. But it is generally understood that the notation $\hat\sigma^2$ denotes an estimate or forecast of the variance, and not the square of an estimate of the standard deviation. Thus, in the case where the mean return is zero, we have $E(\hat\sigma^2) = \sigma^2$.

If the mean return is not assumed to be zero, we need to estimate it from the sample, and this places a (linear) constraint on the variance estimated from the sample data. In that case, to obtain an unbiased estimate we must use

$$\hat\sigma_{i,t}^2 = \frac{1}{T-1} \sum_{l=1}^{T} \left( r_{i,t-l} - \bar r_i \right)^2 \quad (CC.)$$

where $\bar r_i$ is the mean return on the $i$th series, taken over the whole sample of $T$ data points. The mean-deviation form above may be useful for estimating variance using monthly or even weekly data over periods in which the average return is significantly different from zero. However, with daily data the mean return is usually very small, and because, as we shall see below, the errors induced by other assumptions are huge relative to the error induced by assuming the mean is zero, we normally use the form (CC.).
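The two variance estimators can be sketched as follows, on a simulated zero-mean daily return series (the series, its parameters, and the 250-day annualization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=500)       # simulated daily returns
T = len(returns)

# Zero-mean form: the average of the squared returns
var_zero_mean = np.sum(returns**2) / T

# Mean-deviation form: divide by T - 1 for an unbiased estimate
r_bar = returns.mean()
var_mean_dev = np.sum((returns - r_bar)**2) / (T - 1)

# Annualized volatility, assuming 250 trading days per year
vol_annual = np.sqrt(250 * var_zero_mean)
```

With daily data the two forms differ only negligibly, which is why the zero-mean form is normally used.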
Similarly, an unbiased estimate of the unconditional covariance of two zero-mean returns at time $t$, based on the last $T$ daily returns, is:

$$\hat\sigma_{ij,t} = \frac{1}{T} \sum_{l=1}^{T} r_{i,t-l}\, r_{j,t-l} \quad (CC.)$$

As mentioned above, we normally ignore the mean-deviation adjustment with daily data. The equally weighted estimate of the unconditional covariance matrix at time $t$ for a set of $k$ returns is thus $\hat V_t = (\hat\sigma_{ij,t})$ for $i, j = 1, \ldots, k$. Loosely speaking, the term "unconditional" refers to the fact that it is an overall, long-term, or average variance that we are estimating, as opposed to a conditional variance, which can change from day to day and is sensitive to recent events. As mentioned in the introduction, we use the term volatility to refer to the annualized standard deviation.

Equally weighted volatility and correlation estimates are obtained in two stages. First, one obtains an unbiased estimate of the unconditional covariance matrix using equally weighted averages of squared returns and cross products of returns, with the same number of data points each time. Then this is converted into volatility and correlation estimates by applying the usual formulae. For instance, if returns are measured at the daily frequency and there are 250 trading days per year:

Equally weighted volatility estimate: $\hat\sigma_{i,t}\sqrt{250}$

Equally weighted correlation estimate: $\hat\rho_{ij,t} = \dfrac{\hat\sigma_{ij,t}}{\hat\sigma_{i,t}\,\hat\sigma_{j,t}} \quad (CC.)$

In the equally weighted methodology, the newly estimated covariance matrix is simply taken as the forecast; there is nothing else in the model to distinguish the forecast from the estimate. The initial risk horizon of the covariance matrix is given by the frequency of the returns data: daily returns will give a 1-day covariance matrix estimate, weekly returns will give a 1-week covariance matrix estimate, and so on.
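The two-stage procedure, an equally weighted covariance matrix first, then volatilities and a correlation, might be sketched like this; the simulated bivariate return series and all of its parameters are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate T days of returns on two assets, daily vol 1%, correlation 0.6
T, true_corr = 500, 0.6
cov_true = 0.01**2 * np.array([[1.0, true_corr],
                               [true_corr, 1.0]])
R = rng.multivariate_normal([0.0, 0.0], cov_true, size=T)   # T x 2 returns

# Stage 1: equally weighted (zero-mean) covariance matrix, V_hat = R'R / T
V_hat = R.T @ R / T

# Stage 2: annualized volatilities and the correlation estimate
vols_annual = np.sqrt(250 * np.diag(V_hat))
rho_hat = V_hat[0, 1] / np.sqrt(V_hat[0, 0] * V_hat[1, 1])
```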
Then, since the model assumes that the returns are independent and identically distributed, we can use the square-root-of-time rule to convert the 1-day estimate into an h-day covariance matrix forecast, simply by multiplying each element of the 1-day matrix by h. Similarly, a monthly forecast can be obtained from a weekly estimate by multiplying each element by the number of weeks in the month, and so on.

Having obtained estimates of variance, volatility, covariance, and correlation, we should ask: How accurate are these estimates? For this we can provide a confidence interval, that is, a range within which we are reasonably sure the true parameter will lie, or a standard error for our parameter estimate. The standard error gives a measure of the precision of the estimate and can be used to test whether the true parameter could take a certain value, or lie in a certain range. The next few sections show how confidence intervals and standard errors may be constructed.

Confidence Intervals for Variance and Volatility

A confidence interval for the true variance $\sigma^2$, when it is estimated by an equally weighted average, can be derived using a straightforward application of sampling theory. Assuming the variance estimate is based on $T$ normally distributed returns with an assumed mean of zero, $T\hat\sigma^2/\sigma^2$ has a chi-squared distribution with $T$ degrees of freedom (see Freund). A two-sided $100(1-\alpha)\%$ confidence interval for $T\hat\sigma^2/\sigma^2$ therefore takes the form $\left(\chi^2_{\alpha/2,T},\ \chi^2_{1-\alpha/2,T}\right)$, where $\chi^2_{q,T}$ denotes the $q$ quantile of the chi-squared distribution with $T$ degrees of freedom, and a straightforward calculation gives the associated confidence interval for the variance $\sigma^2$ as:

$$\left( \frac{T\hat\sigma^2}{\chi^2_{1-\alpha/2,T}},\ \frac{T\hat\sigma^2}{\chi^2_{\alpha/2,T}} \right) \quad (CC.)$$

For instance, a confidence interval for an equally weighted variance estimate is obtained by substituting the upper and lower chi-squared critical values into (CC.),
and exact values are obtained by substituting in the estimated value of the variance. Figure CC. illustrates the upper and lower bounds of this confidence interval for a variance forecast when the equally weighted variance estimate is one. We see that as the sample size $T$ increases, the width of the confidence interval decreases, most markedly as $T$ increases from low values.

We can now turn to confidence intervals for volatility forecasts. Recall that volatility, being the square root of the variance, is simply a monotonic increasing transformation of the variance. Percentiles are invariant under strictly monotonic increasing transformations. That is, if $f$ is a monotonic increasing function of a random variable $X$, then:

$$P(c_l < X < c_u) = P\left(f(c_l) < f(X) < f(c_u)\right) \quad (CC.)$$

Property (CC.) gives a confidence interval for the historical volatility from the confidence interval (CC.) for the variance. Since $\sqrt{x}$ is a monotonic increasing function of $x$, one simply takes the square root of the lower and upper bounds of the equally weighted variance interval. For instance, if the confidence interval for the variance is (0.04, 0.09), then that for the associated volatility is (0.2, 0.3). And, since $x^2$ is also monotonic increasing for $x > 0$, the converse also applies: if a confidence interval for a volatility is (0.2, 0.3), then that for the associated variance is (0.04, 0.09).

Standard Errors for Equally Weighted Moving Average Estimators

Every parameter estimator has a distribution, and a point estimate of volatility is just the expectation of the distribution of the volatility estimator. The distribution function of the equally weighted average volatility estimator is not simply the square root of the distribution function of the corresponding variance estimator. Instead, it may be derived from the distribution of the variance estimator via a simple transformation.
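Assuming SciPy is available, the chi-squared interval for the variance, and its conversion to a volatility interval by taking square roots, can be computed as below; the sample size and the variance estimate are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

T = 30                  # number of (zero-mean) returns in the estimate
sigma2_hat = 0.04       # an equally weighted variance estimate (20% volatility)
alpha = 0.05            # for a two-sided 95% confidence interval

# T * sigma2_hat / sigma2 ~ chi-squared with T degrees of freedom, so:
var_lower = T * sigma2_hat / chi2.ppf(1 - alpha / 2, T)
var_upper = T * sigma2_hat / chi2.ppf(alpha / 2, T)

# Percentiles are invariant under monotonic increasing transformations,
# so the volatility interval is the square root of the variance interval
vol_lower, vol_upper = np.sqrt(var_lower), np.sqrt(var_upper)
```

The interval is asymmetric about the point estimate, reflecting the skew of the chi-squared distribution at small T.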
Since volatility is the square root of the variance, the density function of the volatility estimator is

$$g(\hat\sigma) = 2\hat\sigma\, h(\hat\sigma^2) \quad \text{for } \hat\sigma > 0 \quad (CC.)$$

where $h(\hat\sigma^2)$ is the density function of the variance estimator. This follows from the fact that if $y$ is a monotonic, differentiable function of $x$, then the probability densities $g(\cdot)$ and $h(\cdot)$ are related by $g(y)\,|dx/dy| = h(x)$ (see Freund). Note that when $y = \sqrt{x}$, $dx/dy = 2y$, and so $g(y) = 2y\,h(x)$.

Besides the point estimate, or expectation, one may also estimate the standard deviation of the distribution of the estimator. This is called the standard error of the estimate. The standard error determines the width of a confidence interval for the forecast, and it indicates how reliable the forecast is: the wider the confidence interval, the more uncertain the forecast.

Standard errors for equally weighted average variance estimates are based on the assumption of normality for the returns. The moving average model assumes that the returns are independent and identically distributed. Now assuming normality as well, so that the returns are normally and independently distributed, denoted NID$(0, \sigma^2)$, we apply the variance operator to (CC.). Note that if the $X_i$ are independent random variables ($i = 1, \ldots, T$), then the $f(X_i)$ are also independent for any monotonic differentiable function $f$. Hence the squared returns are independent, and we have:

$$V(\hat\sigma_t^2) = \frac{1}{T^2} \sum_{i=1}^{T} V(r_{t-i}^2) \quad (CC.)$$

Since $V(X) = E(X^2) - E(X)^2$ for any random variable $X$, $V(r_t^2) = E(r_t^4) - E(r_t^2)^2$. Under the zero-mean assumption $E(r_t^2) = \sigma^2$, and under the normality assumption $E(r_t^4) = 3\sigma^4$. Hence, for every $t$:

$$V(r_t^2) = 3\sigma^4 - \sigma^4 = 2\sigma^4$$

and substituting this into the expression above gives

$$V(\hat\sigma_t^2) = \frac{2\sigma^4}{T} \quad (CC.)$$

Hence the standard error of an equally weighted average variance estimate based on $T$ squared zero-mean returns is $\sigma^2\sqrt{2/T}$, or simply $\sqrt{2/T}$ when expressed as a percentage of the variance.
For instance, the standard error of the variance estimate is 20% of the true variance when 50 observations are used in the estimate, and 10% when 200 observations are used.

What about the standard error of the volatility estimator? To obtain this, we first show that for a continuously differentiable function $f$ and a random variable $X$:

$$V(f(X)) \approx f'(E(X))^2\, V(X) \quad (CC.)$$

To show this, we take second-order Taylor expansions about the mean of $X$ and then take expectations (see Alexander (2008)). This gives:

$$E(f(X)) \approx f(E(X)) + \tfrac{1}{2} f''(E(X))\, V(X) \quad (CC.)$$

and similarly,

$$E\left(f(X)^2\right) \approx f(E(X))^2 + \left[ f'(E(X))^2 + f(E(X))\, f''(E(X)) \right] V(X) \quad (CC.)$$

again ignoring higher-order terms. The result (CC.) follows on noting that $V(f(X)) = E(f(X)^2) - E(f(X))^2$.

We can now use (CC.) and (CC.) to obtain the standard error of the historical volatility estimate. From (CC.), with $f(\hat\sigma^2) = \hat\sigma$ so that $f'(\hat\sigma^2) = 1/(2\hat\sigma)$, we have:

$$V(\hat\sigma) \approx \frac{V(\hat\sigma^2)}{4\sigma^2} \quad (CC.)$$

Now using (CC.) in (CC.), we obtain the variance of the volatility estimator as:

$$V(\hat\sigma) \approx \frac{2\sigma^4}{4\sigma^2 T} = \frac{\sigma^2}{2T} \quad (CC.)$$

so that the standard error of the volatility estimator, as a percentage of volatility, is $\sqrt{1/(2T)}$. This result tells us that the standard error of the volatility estimator (as a percentage of volatility) is approximately one-half the size of the standard error of the variance estimator (as a percentage of variance). So, as a percentage of volatility, the standard error of the historical volatility estimator is approximately 10% when 50 observations are used in the estimate, and 5% when 200 observations are used.

The standard error of an equally weighted moving average volatility estimate becomes very large when only a few observations are used. This is one reason why it is advisable to use a long averaging period in historical volatility estimates.
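These standard errors are simple enough to wrap in two helper functions (the function names are mine, not the chapter's):

```python
import numpy as np

def variance_se_pct(T):
    """Standard error of the equally weighted variance estimator,
    expressed as a fraction of the true variance: sqrt(2/T)."""
    return np.sqrt(2.0 / T)

def volatility_se_pct(T):
    """Standard error of the volatility estimator as a fraction of the
    true volatility: sqrt(1/(2T)), roughly half the variance figure."""
    return np.sqrt(1.0 / (2.0 * T))
```

For example, variance_se_pct(50) returns 0.20 and volatility_se_pct(50) returns 0.10, exactly half its size.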
It is more difficult to derive the standard error of an equally weighted average correlation estimate. However, it can be shown that $V(\hat\rho_{ij}) \approx (1-\rho_{ij}^2)^2/T$ (CC.), and we have the following t distribution for the correlation estimate divided by its standard error:

$$\frac{\hat\rho_{ij}\sqrt{T-2}}{\sqrt{1-\hat\rho_{ij}^2}} \sim t_{T-2} \quad (CC.)$$

In particular, the significance of a correlation estimate depends on the number of observations used in the sample. To illustrate a test for the significance of a historical correlation, suppose a small positive correlation estimate is obtained from a short sample of observations. Is it significantly greater than zero? The null hypothesis is $H_0\!: \rho = 0$, the alternative hypothesis is $H_1\!: \rho > 0$, and the test statistic is (CC.). With a short sample, the computed value of the statistic can fall below even the upper 10% critical value of the t distribution, in which case we cannot reject the null hypothesis: the estimate is not significantly greater than zero. However, if the same correlation estimate had been obtained from a sample several times larger, the t value would exceed the relevant upper critical value and the estimate would be significantly positive, because the statistic grows with $\sqrt{T-2}$ while the critical values shrink only slowly.

Equally Weighted Moving Average Covariance Matrices

An equally weighted moving average is calculated on a data window of fixed size that is rolled through time, each day adding the newest return and dropping the oldest. The length of this data window, also called the look-back period or averaging period, is the time interval over which we compute the average of the squared returns (for the variance) or of the cross products of the returns (for the covariance). In the past, several large financial institutions have lost a lot of money because they used the equally weighted moving average model inappropriately.
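A sketch of such a significance test, assuming SciPy is available and using the standard t statistic for a sample correlation with T − 2 degrees of freedom (the helper name and the sample figures are illustrative assumptions):

```python
import numpy as np
from scipy.stats import t as t_dist

def corr_t_test(rho_hat, T, alpha=0.05):
    """One-sided test of H0: rho = 0 against H1: rho > 0 using
    t = rho_hat * sqrt(T - 2) / sqrt(1 - rho_hat**2) ~ t with T-2 df."""
    t_stat = rho_hat * np.sqrt(T - 2) / np.sqrt(1.0 - rho_hat**2)
    critical = t_dist.ppf(1.0 - alpha, T - 2)
    return t_stat, critical, t_stat > critical

# The same estimate can be insignificant in a short sample
# but significant in a long one
_, _, sig_short = corr_t_test(0.2, 20)     # 20 observations
_, _, sig_long = corr_t_test(0.2, 300)     # 300 observations
```

Here sig_short comes out false while sig_long comes out true: a correlation estimate of 0.2 is not significantly positive with 20 observations, but it is with 300.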
I would not be surprised if more money were lost through inexperienced use of this model in the future. The problem is not the model itself, which is a perfectly respectable statistical formula for an unbiased estimator; the problems arise from its inappropriate application in a time series context.

A common misconception is the following: long-term forecasts should be unaffected by short-term phenomena such as volatility clustering, so it would be appropriate to take the average over a very long historical period; but short-term forecasts should reflect current market conditions, which means that only the immediate past returns should be used. Some people use a historical averaging period of T days in order to forecast T days ahead; others use a shorter historical period than the forecast horizon. But this apparently sensible approach actually induces a major problem. If one or more extreme returns fall within the averaging period, the volatility (or correlation) estimate suddenly jumps to a completely different level on a day when nothing at all happened in the market. And before this mysterious plunge, the historical estimate will have been much larger than it should be.

Figure CC.  MIB 30 and S&P 500 Daily Closing Prices

Figure CC. shows the daily closing prices of the Italian MIB 30 stock index from the beginning of January 2000 onward and compares them with the prices of the S&P 500 index over the same period. The prices were downloaded from Yahoo! Finance. We will show how to calculate the 30-day, 60-day, and 90-day historical volatilities of both of these stock indices and compare them graphically.
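The rolling-window estimator, and the "ghost feature" that a single extreme return leaves in it, can be reproduced on simulated data; the shock size, its date, and the window length below are illustrative assumptions:

```python
import numpy as np

def rolling_volatility(returns, window, trading_days=250):
    """Equally weighted moving average volatility: at each date, the
    annualized square root of the mean squared return over the
    previous `window` days (zero-mean form)."""
    returns = np.asarray(returns, dtype=float)
    out = np.full(returns.shape, np.nan)
    for t in range(window, len(returns) + 1):
        out[t - 1] = np.sqrt(trading_days * np.mean(returns[t - window:t] ** 2))
    return out

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 300)      # a calm market with 1% daily volatility
r[100] = -0.10                      # one extreme day (an illustrative shock)

vol30 = rolling_volatility(r, 30)
# The estimate jumps up at day 100 and stays inflated for exactly 30 days,
# then drops sharply at day 130, although nothing happens on that day
```

The sudden drop 30 days after the shock is the ghost feature: it reflects the window construction, not any change in market conditions.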
We construct three equally weighted moving average volatility estimates for the MIB 30 index, with look-back periods of T = 30, 60, and 90 days. The results are shown in Figure CC.

Figure CC.  Equally Weighted Moving Average Volatility Estimates of the MIB 30 Index

Let us first focus on the early part of the data period, and in particular on the period after the terrorist attacks of September 11, 2001. The Italian index reacted to the news much more than most other indices. The volatility estimate based on 30 days of data jumped up enormously within a day, and then continued to rise further. Then suddenly, exactly 30 days after the event, the 30-day volatility jumped down again. But nothing happened in the Italian market on that day. The dramatic fall in volatility was just the ghost of the terrorist attacks: It was no reflection at all of the real market conditions at the time. Similar features are apparent in the 60-day and 90-day volatility series. Each series jumps up immediately after the event and then, either 60 or 90 days afterward, jumps down again. In November 2001, the three different look-back periods gave three markedly different volatility estimates, but they are all based on the same underlying data and the same independent and identically distributed assumption for the returns. Other such ghost features are evident later in the period. Later on in the period, the choice of look-back period does not make so much difference: The three volatility estimates are all at a similar level.

Case Study: Measuring the Volatility and Correlation of U.S. Treasuries

The interest rate covariance matrix is an important determinant of the value at risk (VaR) of a cash flow.
In this section, we show how to estimate the volatilities and correlations of different maturity U.S. zero-coupon interest rates using the equally weighted moving average method. Consider a long sample of daily data on constant maturity U.S. Treasury rates, from the 1990s through the 2000s, graphed in Figure CC. (U.S. Treasury Rates; source: data.htm). It is evident that rates followed marked trends over the period: from their highs early in the sample, the short-term rates had fallen to very low levels by the end of the sample. Also, periods where the term structure of interest rates is relatively flat are interspersed with periods when the term structure is upward sloping, sometimes with the long-term rates being several percent higher than the short-term rates. During the upward-sloping yield curve regimes, especially the latter one from 2000 onward, the medium- to long-term interest rates are more volatile than the short-term rates in absolute terms. However, it is not clear which rates are the most volatile in relative terms, as the short rates are much lower than the medium- to long-term rates.

There are three decisions that must be made:

Decision 1. How long an historical data period should be used?
Decision 2. Which frequency of observations should be used?
Decision 3. Should the volatilities and correlations be measured directly on absolute changes in interest rates, or should they be measured on relative changes and the result then converted into absolute terms?

Decision 1: How Long a Historical Data Period Should Be Used

The equally weighted historical method gives an average volatility, or correlation, over the sample period chosen. The longer the data period, the less relevant that average may be today (i.e., at the end of the sample). Looking at Figure CC., it may be thought that data from 2000 onward, and possibly also data from the first half of the 1990s, are relevant today.
However, we may not wish to include data from the latter half of the 1990s, when the yield curve was flat.

Decision 2: Which Frequency of Observations Should Be Used

This is an important decision, which depends on the end use of the covariance matrix. We can always use the square root of time rule to convert the holding period of a covariance matrix. For instance, a 10-day covariance matrix can be converted into a 1-day matrix by dividing each element by 10, and it can be converted into an annual covariance matrix by multiplying each element by 25 (assuming 250 trading days per year). However, this conversion is based on the assumption that variations in interest rates are independent and identically distributed. Moreover, the data become more noisy when we use high-frequency data. For instance, daily variations may not be relevant if we only ever want to measure covariances over a 10-day period. The extra variation in the daily data is not useful, and the crudeness of the square root of time rule will introduce an error. To avoid the use of crude assumptions, it is best to use a data frequency that corresponds to the holding period of the covariance matrix.

However, the two decisions above are linked. For instance, if data are quarterly, we need a data period of five or more years; otherwise, the standard error of the estimates will be very large. But then our quarterly covariance matrix represents an average over many years that may not be thought of as relevant today. If data are daily, then just one year of data provides plenty of observations to measure the volatilities and correlations of the historical model accurately. Also, a history of one year is a better representation of today's markets than a history of five or more years. However, if it is a quarterly covariance matrix that we seek, we have to apply the square root of time rule to the daily matrix.
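The square root of time conversions described under Decision 2 amount to simple elementwise scaling. A sketch (the matrix entries are hypothetical; 250 trading days per year is assumed):

```python
import numpy as np

# Square root of time rule for covariance matrices: under i.i.d.
# returns, an h-day covariance matrix is h times the 1-day matrix.
cov_10day = np.array([[4.0, 1.2],
                      [1.2, 9.0]])   # hypothetical 10-day matrix

cov_1day = cov_10day / 10          # 10-day -> 1-day: divide by 10
cov_annual = cov_1day * 250        # 1-day -> annual (250 trading days)
# Equivalently, 10-day -> annual is multiplication by 25:
assert np.allclose(cov_annual, cov_10day * 25)

# Correlations are unchanged by this scaling; only variances
# and covariances are rescaled.
def corr(c):
    d = np.sqrt(np.diag(c))
    return c / np.outer(d, d)

assert np.allclose(corr(cov_10day), corr(cov_annual))
```

Note that the scaling applies to variances and covariances; volatilities (their square roots) scale by the square root of the horizon, which is where the rule gets its name.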
Moreover, the daily variations that are captured by the matrix may not be relevant information at the quarterly frequency. In summary, there may be a trade-off between using data at the relevant frequency and using data that are relevant today. It should be noted that such a trade-off between Decisions and above applies to the measurement of risk in all asset classes and not only to interest rates. In interest rates, there is another decision to make before we can measure risk. Since the price value of a basis point PV0) sensitivity vector is usually measured in basis points, an interest rate covariance matrix is also usually expressed in basis points. Hence, we have Decision. Decision. Should the Volatilities and Correlations Be Measured Directly on Absolute Changes in Interest Rates, or Should They Be Measured on Relative Changes and Then the Result Converted into Absolute Terms If rates have been trending over the data period the two approaches are likely to give very different results. One has to make a decision about whether relative changes or absolute changes are the more stable. In these data, for example, an absolute change of basis points in was relatively small, but in 00 it would have represented a very large change. Hence, to estimate an average daily covariance matrix over the entire data sample, it may be more reasonable to suppose that the volatilities and correlations should be measured on relative changes and then converted to absolute terms. Note, however, that a daily matrix based on the entire sample would capture a very long-term average of volatilities and correlations between daily U.S. Treasury rates, indeed it is a -year average that includes several periods of different regimes in interest rates. Such a long-term average, which is useful for long-term forecasts may be better based on lower frequency data e.g. monthly). For a -day forecast horizon. we shall use only the data since January, 000. 
To make the choice for Decision 3, we take both the relative daily changes (the differences in the log rates) and the absolute daily changes (the differences in the rates, in basis-point terms). Then we obtain the standard deviation, correlation, and covariance in each case, and in the case of relative changes we translate the results into absolute terms. We now compare results based on relative changes with results based on absolute changes. The correlation matrix estimates, based on the period from January 2000 to the end of the sample, are shown in Table CC. (Correlation of U.S. Treasuries: (a) Based on Relative Changes; (b) Based on Absolute Changes). The matrices are similar. Both display the usual characteristics of an interest rate term structure: correlations are higher at the long end than at the short end, and they decrease as the difference between the two maturities increases.

Table CC. compares the volatilities of the interest rates obtained using the two methods. The figures in the last row of each panel represent an average absolute volatility for each rate over the same period. Basing this first on relative changes in interest rates, Table CC.(a) gives the standard deviation of relative returns (the volatility) in the first row. The long-term rates have the lowest standard deviations, and the medium-term rates have the highest. These standard deviations are then annualized (by multiplying by the square root of 250, assuming each rate is independent and identically distributed) and multiplied by the level of the interest rate at the end of the sample. There was a very marked upward-sloping yield curve at that time. Hence the long-term rates are more volatile than the short-term rates in absolute terms: for instance, the absolute volatility of the 10-year rate, in basis points, is far greater than that of the short-maturity money market rates.
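The two conventions in Decision 3 can be compared mechanically: take differences of log rates (relative changes) or differences of rates in basis points (absolute changes), annualize by the square root of 250, and convert relative volatilities into absolute terms by multiplying by the current rate level. A sketch on synthetic rates (all names and numbers are our own illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily zero-coupon rates (%) for two maturities.
rates = np.cumsum(rng.normal(0, 0.02, size=(500, 2)), axis=0) + np.array([2.0, 5.0])
rates = np.clip(rates, 0.5, None)  # keep rates strictly positive

# Absolute daily changes, in basis points (1% = 100 bp).
d_abs = np.diff(rates, axis=0) * 100
vol_abs_bp = d_abs.std(axis=0, ddof=1) * np.sqrt(250)

# Relative daily changes: differences of the log rates.
d_rel = np.diff(np.log(rates), axis=0)
# Annualize, then convert to absolute terms by multiplying by the
# current level of each rate (in bp), as described in the text.
vol_rel_bp = d_rel.std(axis=0, ddof=1) * np.sqrt(250) * rates[-1] * 100

# Correlations under the two conventions can be compared directly.
corr_abs = np.corrcoef(d_abs.T)[0, 1]
corr_rel = np.corrcoef(d_rel.T)[0, 1]
print(vol_abs_bp, vol_rel_bp, corr_abs, corr_rel)
```

As in the case study, the two correlation estimates are typically close, while the volatilities can differ markedly when rates have trended.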
Table CC.(b) measures the standard deviation of absolute changes in interest rates over the same period and then converts this into volatility by multiplying by the square root of 250. We again find that the long-term rates are more volatile than the short-term rates; for instance, the absolute volatility of the five-year rate is much greater than that of the six-month rate. It should be noted that it is quite unusual for long-term rates to be more volatile than short-term rates. But over this period the U.S. Fed was exerting a lot of control on short-term rates, to bring down the general level of interest rates. (However, the market expected interest rates to rise, because the yield curve was upward sloping during most of the period.)

We find that the correlations were similar, whether based on relative or absolute changes. But Table CC. (Volatility of U.S. Treasuries: (a) Based on Relative Changes; (b) Based on Absolute Changes) shows that there is a substantial difference between the volatilities obtained using the two methods: when volatilities are based directly on the absolute changes, they are slightly lower at the short end and substantially lower for the medium-term rates.

Finally, we obtain the annual covariance matrix of absolute changes (in basis-point terms) by multiplying the correlation matrix by the appropriate absolute volatilities, and to obtain the one-day covariance matrix we divide by 250. The results are shown in Table CC. Depending on whether we base estimates of volatility and correlation on relative or absolute changes in interest rates, the covariance matrix can be very different.
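The final construction above (scaling a correlation matrix by the appropriate absolute volatilities, then dividing by 250 to obtain a one-day matrix) is the familiar C = DRD operation, where D is the diagonal matrix of volatilities. A sketch with hypothetical volatilities and correlations:

```python
import numpy as np

# Hypothetical annual absolute volatilities (basis points) and
# correlation matrix for three maturities.
vols_bp = np.array([40.0, 80.0, 100.0])
corr = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.9],
                 [0.6, 0.9, 1.0]])

# Annual covariance matrix: C = D R D, with D = diag(volatilities).
D = np.diag(vols_bp)
cov_annual = D @ corr @ D

# One-day covariance matrix: divide by 250 trading days.
cov_1day = cov_annual / 250

# Sanity check: the implied correlations are unchanged by scaling.
d = np.sqrt(np.diag(cov_1day))
assert np.allclose(cov_1day / np.outer(d, d), corr)
```

The diagonal of `cov_annual` contains the squared volatilities, and the off-diagonal entries are correlation times the two volatilities, exactly as described in the text.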
In this case, it is the short-term and medium-term volatility estimates that are most affected by the choice. Given that we have used the equally weighted average methodology to construct the covariance matrix, the underlying assumption is that volatilities and correlations are constant. Hence, the choice between relative and absolute changes depends on which are the more stable. In countries with very high interest rates, or when interest rates have been trending during the sample period, relative changes tend to be more stable than absolute changes.

In summary, there are four crucial decisions to be made when estimating a covariance matrix for interest rates:
1. Which statistical model should we employ?
2. Which historical data period should be used?
3. Should the data frequency be daily, weekly, monthly, or quarterly?
4. Should we base the matrix on relative or absolute changes in interest rates?

Table CC. One-Day Covariance Matrix of U.S. Treasuries, in Basis Points: (a) Based on Relative Changes; (b) Based on Absolute Changes

The first three decisions must also be made when estimating covariance matrices in other asset classes, such as equities, commodities, and foreign exchange rates. There is a huge amount of model risk involved in the construction of covariance matrices: very different results may be obtained depending on the choices made.

Pitfalls of the Equally Weighted Moving Average Method

The problems encountered when applying this model stem not from the small jumps that are often encountered in financial asset prices, but from the large jumps that are only rarely encountered. When a long averaging period is used, the importance of a single extreme event is averaged out within a large sample of returns. Hence, a moving average volatility estimate may not respond enough to a short, sharp shock in the market.
This effect is clearly visible in the MIB volatility series, where only the shortest-term volatility rose significantly, over a matter of a few weeks, in response to the market falls in the MIB during the middle of the period. The longer-term volatilities did rise, but it took several months for them to respond. At this point in time there was actually a cluster of volatility, as often happens in financial markets. The effect of the cluster was to make the longer-term volatilities rise, eventually, but they then took too long to return to normal levels. It was not until markets returned to normal near the end of the sample that the three volatility series in Figure CC. came back into line with each other.

When there is an extreme event in the market, even just one very large return will influence the T-day moving average estimate for exactly T days, until that very large squared return falls out of the data window. Hence volatility will jump up, for exactly T days, and then fall dramatically on day T + 1, even though nothing happened in the market on that day. This type of ghost feature is simply an artefact of the use of equal weighting. The problem is that extreme events are just as important to current estimates whether they occurred yesterday or a very long time ago. A single large squared return carries the same weight whether it occurred T days ago or yesterday: it will affect the T-day volatility (or correlation) estimate to exactly the same extent for exactly T days after that return was experienced. However, with other models we would find that volatility or correlation had long ago returned to normal levels.
Exactly T days after the extreme event, the equally weighted moving average volatility estimate mysteriously drops back down to about the correct level (provided, that is, that we have not had another extreme return in the interim). Note that the smaller is T, the number of data points used in the data window, the more variable the historical volatility series will be. Estimates based on a small sample size are not very precise; the larger the sample size, the more accurate the estimate, because sampling errors are proportional to 1/√T. For this reason alone, a short moving average will be more variable than a long moving average. Hence, a short-term historical volatility (or correlation) will always be more variable than a long-term historical volatility (or correlation) based on the same daily return data. Of course, if one really believes in the assumption of constant volatility that underlies this method, one should always use as long a history as possible, so that sampling errors are reduced.

It is important to realize that, whatever the length of the historical averaging period and whenever the estimate is made, the equally weighted method is always estimating the same parameter: the unconditional volatility (or correlation) of the returns. But this is a constant; it does not change over time. Thus, the variation in T-day historical estimates can only be attributed to sampling error: there is nothing else in the model to explain this variation. It is not a time-varying volatility model, even though some users try to force it into that framework. The problem with the equally weighted moving average model is that it tries to turn an estimate of a constant volatility into a forecast of a time-varying volatility. Similarly, it tries to turn an estimate of a constant correlation into a forecast of a time-varying correlation.
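The point about sampling error can be checked by simulation: for i.i.d. returns with constant volatility, the variability of the T-observation volatility estimate shrinks roughly like 1/√T. A sketch (the parameters and trial counts are our own):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.01  # true daily volatility (constant, i.i.d. returns)

def vol_estimate_spread(T, n_trials=2000):
    """Standard deviation across many T-observation volatility estimates."""
    r = rng.normal(0, sigma, size=(n_trials, T))
    estimates = np.sqrt((r ** 2).mean(axis=1))
    return estimates.std()

short, long_ = vol_estimate_spread(30), vol_estimate_spread(300)
# Sampling error is proportional to 1/sqrt(T): a tenfold increase in
# the sample size shrinks it by roughly sqrt(10), about 3.16.
print(short / long_)
```

The short-window estimates are several times more variable than the long-window ones, even though every estimate targets the same constant parameter.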
No wonder financial firms have lost a lot of money with this model! It is really only suitable for long-term forecasts of average volatility, or correlation, for instance over a period of between six months and several years. In this case, the look-back period should be long enough to include a variety of price jumps, with a relative frequency that represents the modeler's expectations of the probability of future price jumps of that magnitude during the forecast horizon.

Using Equally Weighted Moving Averages

To forecast a long-term average for volatility using the equally weighted model, it is standard to use a large sample size T in the variance estimate. The confidence intervals for historical volatility estimators given earlier in this chapter provide a useful indication of the accuracy of these long-term volatility forecasts, and the approximate standard errors derived earlier give an indication of the variability in long-term volatility. There we saw that the variability in estimates decreases as the sample size increases. Hence, a long-term volatility forecast from this model may prove useful. When pricing options, it is the long-term volatility that is most difficult to forecast: options trading often focuses on short-maturity options, and long-term options are much less liquid, so it is not easy to obtain a long-term implied volatility. Long-term volatility holds the greatest uncertainty, yet it is the most important determinant of long-term option prices.

We conclude this section with an interesting conundrum. Consider two hypothetical historical volatility modelers, whom we shall call Tom and Dick, both forecasting volatility over a fixed risk horizon based on an equally weighted average of squared returns over the past twelve months of daily data. Imagine that it is January and that in the previous October the market crashed, falling precipitously in the space of a few days.
So some very large jumps occurred during the current data window, albeit three months ago. Tom includes these extremely large returns in his data window, so his ex post average of squared returns, which is also his volatility forecast in this model, will be very high. Because of this, Tom has an implicit belief that another jump of equal magnitude will occur during the forecast horizon. This implicit belief will continue until one year after the crash, when those large negative returns fall out of his moving data window. Consider Tom's position the following October: up to the middle of the month he includes the crash period in his forecast, but after that the crash period drops out of the data window and his forecast of future volatility suddenly decreases, as if he had suddenly decided that another crash was very unlikely. That is, he drastically changes his belief about the possibility of an extreme return. So, to be consistent with his previous beliefs, should Tom now bootstrap the extreme returns experienced during the crash back into his data set?

And what about Dick, who in January does not believe that another market crash could occur within his forecast horizon? To be consistent with his beliefs, he should somehow filter those extreme returns out of his data. Of course, it is dangerous to embrace the possibility of bootstrapping in, and filtering out, extreme returns in an ad hoc way before the data are used in the model. However, if one does not do this, the historical model can imply very strange behavior of the modeler's beliefs.

In the Bayesian framework of uncertain volatility, the equally weighted model has an important role to play. Equally weighted moving averages can be used to set bounds for long-term volatility; that is, we can use the model to find a range [σ_min, σ_max] for the long-term average volatility forecast.
The lower bound σ_min can be estimated using a long period of historical data with all the very extreme returns removed, and the upper bound σ_max can be estimated from the same historical data with the very extreme returns retained, perhaps even adding some more. A modeler's beliefs about long-term volatility can then be formalized by a probability distribution over the range [σ_min, σ_max]. This distribution is carried through the rest of the analysis. For instance, upper and lower price bounds might be obtained for long-term exposures with option-like structures, such as warrants on a firm's equity or convertible bonds. This type of Bayesian method, which provides a price distribution rather than a single price, will be increasingly used in market risk management in the future.

EXPONENTIALLY WEIGHTED MOVING AVERAGES

An exponentially weighted moving average (EWMA) avoids the pitfalls explained in the previous section because it puts more weight on the more recent observations. Thus, as extreme returns move further into the past when the data window slides along, they become less important in the average.

Statistical Methodology

An exponentially weighted moving average can be defined on any time series of data. Say that on date t we have observations x_t, …, x_1. The exponentially weighted average of these observations is defined as

\[ \mathrm{EWMA}(x_t,\dots,x_1) = \frac{x_t + \lambda x_{t-1} + \lambda^2 x_{t-2} + \cdots + \lambda^{t-1} x_1}{1 + \lambda + \lambda^2 + \cdots + \lambda^{t-1}} \]

where λ is a constant, 0 < λ < 1, called the smoothing (or decay) constant. Since λ^T → 0 as T → ∞, the exponentially weighted average places negligible weight on observations far in the past. And since 1 + λ + λ² + ⋯ = 1/(1 − λ), we have, for large t,
\[ \mathrm{EWMA}(x_t,\dots,x_1) \approx (1-\lambda)\sum_{i=0}^{\infty} \lambda^{i}\, x_{t-i} \]

This is the formula that is used to calculate exponentially weighted moving average (EWMA) estimates of variance (with x being the squared return) and covariance (with x being the cross product of the two returns). As with equally weighted moving averages, it is standard to use squared daily returns and cross products of daily returns, not in mean-deviation form. That is:

\[ \hat\sigma_t^2 = (1-\lambda)\sum_{i=1}^{\infty} \lambda^{i-1}\, r_{t-i}^2 \tag{CC.9} \]

\[ \hat\sigma_{12,t} = (1-\lambda)\sum_{i=1}^{\infty} \lambda^{i-1}\, r_{1,t-i}\, r_{2,t-i} \tag{CC.10} \]

The above formulae may be rewritten in the form of recursions, which are more easily used in calculations:

\[ \hat\sigma_t^2 = (1-\lambda)\, r_{t-1}^2 + \lambda\, \hat\sigma_{t-1}^2 \tag{CC.11} \]

and

\[ \hat\sigma_{12,t} = (1-\lambda)\, r_{1,t-1}\, r_{2,t-1} + \lambda\, \hat\sigma_{12,t-1} \tag{CC.12} \]

An alternative notation, used when we want to make explicit the dependence on the smoothing constant, is V_λ(r_t) for σ̂²_t and COV_λ(r_{1,t}, r_{2,t}) for σ̂_{12,t}. One converts the variance to volatility by taking the annualized square root, the annualizing constant being determined by the data frequency as usual. Note that for the EWMA correlation, the covariance is divided by the square root of the product of the two EWMA variance estimates, all with the same value of λ. Similarly, for the EWMA beta, the covariance between the stock (or portfolio) returns and the market returns is divided by the EWMA estimate of the market variance, both with the same value of λ. That is:

\[ \hat\rho_t = \frac{\mathrm{COV}_\lambda(r_{1,t}, r_{2,t})}{\sqrt{V_\lambda(r_{1,t})\, V_\lambda(r_{2,t})}} \tag{CC.13} \]

and

\[ \hat\beta_t = \frac{\mathrm{COV}_\lambda(X_t, Y_t)}{V_\lambda(X_t)} \tag{CC.14} \]

Interpretation of Lambda

There are two terms on the right-hand side of (CC.11). The first term, (1 − λ) r²_{t−1}, determines the intensity of reaction of volatility to market events: the smaller is λ, the more the volatility reacts to the market information in yesterday's return. The second term, λ σ̂²_{t−1}, determines the persistence in volatility: irrespective of what happens in the market, if volatility was high yesterday it will still be high today.
The closer λ is to 1, the more persistent is volatility following a market shock. Thus, a high λ gives little reaction to actual market events but great persistence in volatility, and a low λ gives highly reactive volatilities that quickly die away. An unfortunate restriction of exponentially weighted moving average models is that the reaction and persistence parameters are not independent: the strength of reaction to market events is determined by 1 − λ, whilst the persistence of shocks is determined by λ. But this restriction is not empirically justified, except perhaps in a few markets (e.g., major U.S. dollar exchange rates).

The effect of using a different value of λ in EWMA volatility forecasts can be quite substantial. Figure CC. (EWMA Volatility Estimates for the S&P 500 with Different Lambdas) compares two EWMA volatility estimates/forecasts of the S&P 500 index, one with a relatively low value of λ and one with a relatively high value. It is not unusual for two such EWMA estimates to differ substantially. So which is the best value to use for the smoothing constant? How should we choose λ? This is not an easy question. (By contrast, in generalized autoregressive conditional heteroskedasticity (GARCH) models there is no question of how we should estimate parameters, because maximum likelihood estimation is an optimal method that always gives consistent estimators.) Statistical methods may be considered: for example, λ could be chosen to minimize the root mean square error between the EWMA estimate of variance and the squared return. But, in practice, λ is often chosen subjectively, because the same value of λ has to be used for all elements in an EWMA covariance matrix.
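The EWMA recursions, together with the rule that correlation combines the EWMA covariance and variance estimates with a single common λ, can be sketched in a few lines (the function names and the illustrative λ = 0.94 are our own choices, not prescribed by the chapter):

```python
import numpy as np

def ewma_var(returns, lam=0.94):
    """EWMA variance via the recursion s2_t = (1-lam) r_{t-1}^2 + lam s2_{t-1}."""
    s2 = np.empty(len(returns))
    s2[0] = returns[0] ** 2          # a simple initialization choice
    for t in range(1, len(returns)):
        s2[t] = (1 - lam) * returns[t - 1] ** 2 + lam * s2[t - 1]
    return s2

def ewma_cov(r1, r2, lam=0.94):
    """EWMA covariance via s12_t = (1-lam) r1_{t-1} r2_{t-1} + lam s12_{t-1}."""
    s12 = np.empty(len(r1))
    s12[0] = r1[0] * r2[0]
    for t in range(1, len(r1)):
        s12[t] = (1 - lam) * r1[t - 1] * r2[t - 1] + lam * s12[t - 1]
    return s12

def ewma_corr(r1, r2, lam=0.94):
    """EWMA correlation: covariance over the square root of the product
    of the two EWMA variances, all with the same lambda."""
    return ewma_cov(r1, r2, lam) / np.sqrt(ewma_var(r1, lam) * ewma_var(r2, lam))

rng = np.random.default_rng(7)
z = rng.normal(size=(1000, 2))
r1 = z[:, 0] * 0.01
r2 = (0.6 * z[:, 0] + 0.8 * z[:, 1]) * 0.01   # true correlation 0.6
rho = ewma_corr(r1, r2)
print(rho[-1])   # should be in the vicinity of the true correlation
```

Because the same λ is used for the covariance and both variances, the estimated correlation is guaranteed to lie between −1 and +1.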
As a rule of thumb, we might take values of λ between a relatively low value, for which volatility is highly reactive but has little persistence, and a value close to 1, for which volatility is very persistent but not highly reactive.

Properties of the Estimates

An EWMA volatility estimate will react immediately following an unusually large return; then the effect of this return on the EWMA volatility estimate gradually diminishes over time. The reaction of EWMA volatility estimates to market events therefore persists over time, with a strength that is determined by the smoothing constant λ: the larger the value of λ, the more weight is placed on observations in the past, and so the smoother the series becomes. Figure CC. (EWMA versus Equally Weighted Volatility) compares the EWMA volatility of the MIB index, with a high value of λ, with an equally weighted volatility estimate based on the same daily returns. The difference between the two estimators is marked following an extreme market return: the EWMA estimate gives a higher volatility than the equally weighted estimate, but it returns to normal levels faster, because it does not suffer from the ghost features discussed above.

One of the disadvantages of using EWMA to estimate and forecast covariance matrices is that the same value of λ is used for all the variances and covariances in the matrix. For instance, in a large matrix covering several asset classes, the same λ applies to all equity indices, foreign exchange rates, interest rates, and/or commodities in the matrix. But why should all these risk factors have similar reaction to, and persistence of, shocks? This constraint is commonly applied merely because it guarantees that the matrix will be positive semidefinite.
The EWMA Forecasting Model

The exponentially weighted average variance estimate, or its equivalent recursive form, is just a methodology for calculating σ̂²_t. That is, it gives a variance estimate at any point in time, but there is no model, as such, that explains the behavior of the variance of returns, σ²_t, at each time t. In this sense, we must distinguish EWMA from a GARCH model, which starts with a proper specification of the dynamics of σ²_t and then proceeds to estimate the parameters of this model. Without a proper model, it is not clear how we should turn our current estimate of variance into a forecast of variance over some future horizon. One possibility is to augment the recursion by assuming it is the estimate associated with the model

\[ \sigma_t^2 = (1-\lambda)\, r_{t-1}^2 + \lambda\, \sigma_{t-1}^2, \qquad r_t \mid I_{t-1} \sim N(0, \sigma_t^2) \tag{CC.15} \]

An alternative is to assume a constant volatility, so that the fact that our estimates are time varying is merely due to sampling error. In that case, any EWMA variance forecast must be constant and equal to the current EWMA estimate. Similar remarks apply to the EWMA covariance, this time regarding EWMA as a simplistic version of a bivariate normal GARCH model. Thus, the EWMA volatility (or correlation) forecast, for all risk horizons, is simply set equal to the current EWMA estimate of volatility (or correlation). The base horizon for the forecast is given by the frequency of the data: daily returns give the one-day covariance matrix forecast, weekly returns give the one-week covariance matrix forecast, and so forth. Then, since the returns are assumed independent and identically distributed, the square root of time rule applies, so we can convert a one-day forecast into an h-day covariance matrix forecast by multiplying each element of the one-day EWMA covariance matrix by h. Since the choice of λ is itself quite ad hoc, as discussed above, some users choose different values of λ for forecasting over different horizons.
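The forecasting rule just described (the flat current estimate, scaled by h under the square root of time rule) is a one-liner once the current matrix is available. A sketch (function names, data, and λ = 0.94 are our own illustrative choices):

```python
import numpy as np

def ewma_cov_matrix(returns, lam=0.94):
    """Current EWMA covariance matrix for a T x n array of daily returns,
    via the matrix recursion S_t = (1-lam) r r' + lam S_{t-1}."""
    S = np.outer(returns[0], returns[0])   # simple initialization choice
    for r in returns[1:]:
        S = (1 - lam) * np.outer(r, r) + lam * S
    return S

def h_day_forecast(S_1day, h):
    """EWMA forecast of the h-day covariance matrix: the flat current
    estimate, with each element multiplied by h."""
    return h * S_1day

rng = np.random.default_rng(3)
r = rng.normal(0, 0.01, size=(500, 2))
S1 = ewma_cov_matrix(r)
S10 = h_day_forecast(S1, 10)
# Volatilities scale by sqrt(h); the implied correlations are unchanged.
```

Because the forecast at every horizon is just a rescaling of the same current estimate, the term structure of EWMA volatility forecasts is always flat.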
For instance, as discussed later in this chapter, in the RiskMetrics™ methodology a relatively low value of λ is used for short-term forecasts and a higher value of λ is used for long-term forecasts. However, this is purely an ad hoc rule.

Standard Errors for EWMA Forecasts

In the previous section, we justified the assumption that the underlying returns are normally and independently distributed with mean zero and variance σ²; that is, for all t, E(r_t) = 0 and V(r_t) = E(r²_t) = σ². In this section, we use this assumption to obtain standard errors for EWMA forecasts. Applying the variance operator to the EWMA variance estimator, and using the fact that V(r²_t) = 2σ⁴ for normally distributed returns (this is derived below), gives

\[ V(\hat\sigma_t^2) = (1-\lambda)^2 \sum_{i=1}^{\infty} \lambda^{2(i-1)}\, V(r_{t-i}^2) = \frac{1-\lambda}{1+\lambda}\; 2\sigma^4 \tag{CC.16} \]

As a percentage of the variance, the standard error of the EWMA variance estimator is therefore √(2(1 − λ)/(1 + λ)), which decreases as λ increases toward 1. A single point forecast of volatility can be very misleading. A forecast is always a distribution: it represents our uncertainty over the quantity that is being forecast. The standard error of a volatility forecast is useful because it can be translated into a standard error for a VaR estimate, for instance, or for an option price. In any VaR model one should be aware of the uncertainty that is introduced by possible errors in the forecast of the covariance matrix. Similarly, in any mark-to-model value of an option, one should be aware of the uncertainty introduced by possible errors in the volatility forecast.
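The standard error formula is easy to evaluate for typical smoothing constants; expressed as a fraction of the true variance σ², it is √(2(1 − λ)/(1 + λ)). A quick sketch (the λ values are purely illustrative):

```python
import math

def ewma_se_pct(lam):
    """Standard error of the EWMA variance estimator as a fraction of
    the true variance, under i.i.d. normal returns:
    sqrt(V(sigma_hat^2)) / sigma^2 = sqrt(2 * (1 - lam) / (1 + lam))."""
    return math.sqrt(2 * (1 - lam) / (1 + lam))

for lam in (0.90, 0.94, 0.97):
    print(f"lambda = {lam:.2f}: s.e. = {100 * ewma_se_pct(lam):.1f}% of the variance")
```

The standard error falls monotonically as λ rises toward 1, reflecting the larger effective sample of past returns that a high λ places weight on.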
From the assumption above, together with normality, we have V(r²_t) = E(r⁴_t) − [E(r²_t)]² = 3σ⁴ − σ⁴ = 2σ⁴, as used in the standard error calculation for the EWMA variance estimator.

The RiskMetrics™ Methodology

Three very large covariance matrices, each based on a different moving average methodology, are available from the RiskMetrics group. These matrices cover all types of assets, including government bonds, money markets, swaps, foreign exchange, and equity indices for a large number of currencies, as well as commodities. Subscribers have access to all of these matrices updated on a daily basis, and end-of-year matrices are also available to subscribers wishing to use them in scenario analysis. After a few days, the datasets are also made available free for educational use. The RiskMetrics™ group is the market leader in market and credit risk data and modeling for banks, corporates, asset managers, and financial intermediaries. It is highly recommended that readers visit the web site, where they will find a surprisingly large amount of information in the form of free publications and data. See the References at the end of this chapter for details.

The three covariance matrices provided by the RiskMetrics group are each based on a history of daily returns in all the asset classes mentioned above. They are:

1. Regulatory matrix: This takes its name from the (unfortunate) requirement that banks must use at least 250 days of historical data for VaR estimation. Hence this metric is an equally weighted average matrix with n = 250. The volatilities and correlations constructed from this matrix represent forecasts of average volatility (or correlation) over the next 250 days.

Figure CC. Comparison of the RiskMetrics Forecasts for FTSE 100 Volatility

2. Daily matrix: This is an EWMA covariance matrix with λ = 0.94 for all elements.
It is not dissimilar to a short-term equally weighted average, except that it does not suffer from the ghost features caused by very extreme market events. The volatilities and correlations constructed from this matrix represent forecasts of average volatility (or correlation) over the next day.

3. Monthly matrix: This is an EWMA covariance matrix with λ = 0.97 for all elements, multiplied by 25 (i.e., using the square root of time rule and assuming 25 days per month). The volatilities and correlations constructed from this matrix represent forecasts of average volatility (or correlation) over the next 25 days.

The main difference between the three methods is evidenced following major market movements: the regulatory forecast will produce a ghost effect of the event, and does not react as much as the daily or monthly forecasts. The most reactive is the daily forecast, but it also has less persistence than the monthly forecast. Figure CC. compares the estimates for FTSE 100 volatility based on each of the three RiskMetrics methodologies, using daily data over the sample period shown. As mentioned earlier in this chapter, these estimates are assumed to be the forecasts over, respectively, one day, one month, and one year. In volatile times, the daily and monthly estimates lie well above the regulatory forecast, and the converse is true in more tranquil periods. For instance, over much of the sample the regulatory estimate of average volatility over the next year was substantially higher than both of the shorter-term estimates. However, it was falling dramatically during this period, and indeed the high regulatory forecast of average volatility over the following year turned out to be entirely wrong. At the end of the period, by contrast, the daily forecasts were the highest, the monthly forecasts were only just below them, and the regulatory forecast over the next year was considerably lower.
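The three constructions can be sketched as follows on simulated returns (the λ values and window lengths follow the descriptions above; the return series is an assumption for illustration):

```python
import numpy as np

def regulatory_variance(r: np.ndarray, n: int = 250) -> float:
    """Equally weighted average of the last n squared daily returns."""
    return float(np.mean(r[-n:] ** 2))

def ewma_variance(r: np.ndarray, lam: float) -> float:
    """EWMA variance via the recursion var = lam*var + (1-lam)*r^2."""
    var = float(r[0] ** 2)              # seed with the first squared return
    for x in r[1:]:
        var = lam * var + (1.0 - lam) * float(x) ** 2
    return var

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, size=1000)    # simulated daily returns, 1% daily vol

daily = ewma_variance(r, lam=0.94)              # 1-day variance forecast
monthly = 25 * ewma_variance(r, lam=0.97)       # 25-day forecast (sqrt-of-time rule)
regulatory = 250 * regulatory_variance(r)       # 250-day forecast

# Express each as an equivalent one-day volatility for comparison:
print(daily ** 0.5, (monthly / 25) ** 0.5, (regulatory / 250) ** 0.5)
```

On i.i.d. simulated data all three estimates agree; the differences described in the text emerge only when volatility clusters or an extreme return enters (and later abruptly leaves) the equally weighted window.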
During periods when the markets have been tranquil for some time, the three forecasts tend to agree more. But during and directly after a volatile period there are large differences between the regulatory forecasts and the two EWMA forecasts, and these differences are very difficult to justify. Neither the equally weighted average nor the EWMA methodology is based on a proper forecasting model: one simply assumes the current estimate is the volatility forecast. But the current estimate is a backward-looking measure based on recent historical data. So both of these moving average models make the assumption that the behavior of future volatility is the same as its past behavior, and this is a very simplistic view.

SUMMARY

The equally weighted moving average, or historical, approach to estimating and forecasting volatilities and correlations was the only statistical method used by practitioners until the mid-1990s. The historical method may provide a useful indication of the possible range for a long-term average, such as the average volatility or correlation over the next several years. However, its application to short-term forecasting is very limited; indeed, the approach suffers from at least four drawbacks. First, the forecast of volatility/correlation over all future horizons is simply taken to be the current estimate of volatility, because the underlying assumption in the model is that returns are independent and identically distributed. Second, the only choice facing the user is the number of data points to use in the data window. The forecasts produced depend crucially on this decision, yet there is no statistical procedure to choose the size of the data window; it is a purely subjective decision.
Third, following an extreme market move, the forecasts of volatility and correlation will exhibit a so-called ghost feature of that extreme move, which will severely bias the volatility and correlation forecasts upward. Finally, the extent of this bias depends very much on the size of the data window.

The bias issue was addressed by J. P. Morgan, the bank that launched the RiskMetrics TM data and software suite in the mid-1990s. The bank's choice of methodology helped to popularize the use of exponentially weighted moving averages (EWMA) by financial analysts. The EWMA approach provides useful forecasts for volatility and correlation over the very short term, such as over the next day or week. However, its use for longer-term forecasting is limited, and this methodology also has two major problems. First, the forecast of volatility/correlation over all future horizons is simply taken to be the current estimate of volatility, because the underlying assumption in the model is that returns are independent and identically distributed. Second, the only choice facing the user is the value of the smoothing constant, λ. The forecasts produced depend crucially on this decision, yet there is no statistical procedure to choose λ. Often an ad hoc choice is made; for example, the same λ is taken for all series, and a higher λ is chosen for a longer-term forecast.

Moving average models assume returns are independent and identically distributed, and the further assumption that they are normally distributed allows one to derive standard errors and confidence intervals for moving average forecasts. But empirical observations suggest that returns to financial assets are hardly ever independent and identically distributed, let alone normally distributed. For these reasons, more and more practitioners are basing their forecasts on generalized autoregressive conditional heteroskedasticity (GARCH) models. There is no doubt that such models produce superior volatility forecasts.
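Unlike the flat term structure implied by moving average models, the h-step-ahead variance forecast from a GARCH(1,1) model mean-reverts to the long-run variance ω/(1 − α − β). A sketch with assumed, purely illustrative parameters (not estimates):

```python
# GARCH(1,1): var_{t+1} = omega + alpha * r_t^2 + beta * var_t.
# The h-step-ahead variance forecast mean-reverts geometrically, at rate
# (alpha + beta), toward the long-run level omega / (1 - alpha - beta).
omega, alpha, beta = 2e-6, 0.08, 0.90      # assumed illustrative parameters
long_run = omega / (1.0 - alpha - beta)    # long-run daily variance (= 1e-4)

def garch_forecast(var_next: float, h: int) -> float:
    """h-step-ahead conditional variance forecast for GARCH(1,1)."""
    return long_run + (alpha + beta) ** (h - 1) * (var_next - long_run)

var_next = 4e-4                            # current forecast: 2% daily volatility
for h in (1, 25, 250):
    # Volatility decays from 2% per day toward the 1% long-run level.
    print(h, garch_forecast(var_next, h) ** 0.5)
```

An EWMA forecast started from the same 2% estimate would remain at 2% at every horizon; this mean reversion is exactly the term-structure behavior discussed next.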
It is only in GARCH models that the term structure of volatility forecasts converges to the long-run average volatility; the other models produce constant volatility term structures. Moreover, the value of the EWMA smoothing constant is chosen subjectively, and the same smoothing constant must be used for all the returns, otherwise the covariance matrix need not be positive semi-definite. GARCH parameters, by contrast, are estimated optimally, and GARCH covariance matrices truly reflect the time-varying volatilities and correlations of the multivariate returns distributions.

REFERENCES

Alexander, C. (2008). Market Risk Analysis. Chichester, UK: John Wiley & Sons.
Freund, J. E. Mathematical Statistics. Englewood Cliffs, NJ: Prentice Hall.
RiskMetrics (1996). RiskMetrics Technical Document.
RiskMetrics (1999). Risk Management: A Practical Guide.
RiskMetrics (2001). Return to RiskMetrics: The Evolution of a Standard.

Volume 1 by Frank J. Fabozzi

Moving Average Models for Volatility and Correlation, and Covariance Matrices
CAROL ALEXANDER, PhD, Professor of Finance, University of Sussex

Abstract: The volatilities and correlations of the returns on a set of assets, risk factors, or interest rates are summarized in a covariance matrix. This matrix lies at the heart of risk and return analysis. It contains all the information necessary to estimate the volatility of a portfolio, to simulate correlated values for its risk factors, to diversify investments, and to obtain efficient portfolios that have the optimal trade-off between risk and return. Both risk managers and asset managers require covariance matrices that may include very many assets or risk factors. For instance, in a global risk management system of a large international bank, all the major yield curves, equity indexes, foreign exchange rates, and commodity prices will be encompassed in one very large dimensional covariance matrix.
Variances and covariances are parameters of the joint distribution of asset (or risk factor) returns. It is important to understand that they are unobservable. They can only be estimated or forecast within the context of a model. Continuous-time models, used for option pricing, are often based on stochastic processes for the variance and covariance. Discrete-time models, used for measuring portfolio risk, are based on time series models for variance and covariance. In each case, we can only ever estimate or forecast variance and covariance.
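As the abstract notes, a covariance matrix contains all the information needed to compute portfolio volatility, whichever moving average (or GARCH) method produced it. A minimal sketch, where the matrix and weights are assumed values:

```python
import numpy as np

# Annualised covariance matrix for two assets (assumed values): 20% and 15%
# volatilities with correlation 0.2, so the covariance is 0.2*0.20*0.15 = 0.006.
Sigma = np.array([[0.0400, 0.0060],
                  [0.0060, 0.0225]])
w = np.array([0.6, 0.4])                   # portfolio weights

port_vol = float(np.sqrt(w @ Sigma @ w))   # sqrt(w' Sigma w)
print(f"portfolio volatility: {port_vol:.2%}")
```

Diversification shows up directly: the result is below the 0.6*20% + 0.4*15% = 18% weighted average of the individual volatilities because the correlation is well below one.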