<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Another Book on Data Science - Optimization in Practice</title>
<meta name="description" content="data science, R, Python, programming, machine learning">
<meta name="author" content="Nailong Zhang">
<!-- Le HTML5 shim, for IE6-8 support of HTML elements -->
<!--[if lt IE 9]>
<script src="https://html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<!-- Le styles -->
<link rel="stylesheet" href="bootstrap-1.1.0.min.css">
<link rel="stylesheet" href="style.css">
<link rel="stylesheet" href="small-screens.css">
<link rel="stylesheet" href="vs.css">
<link rel="stylesheet" href="code.css">
<link rel="stylesheet" href="application.css">
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-142297640-1', 'anotherbookondatascience.com');
ga('send', 'pageview');
</script>
</head>
<body>
<div class="topbar">
<div class="fill">
<div class="container-fluid fixed">
<h3><a href="index.html">Optimization in Practice</a></h3>
<ul class="nav secondary-nav">
<li><a href="chapter5.html">«Previous</a></li>
<li><a href="chapter7.html">Next»</a></li>
</ul>
</div>
</div>
</div>
<div class="container-fluid" style="padding-top: 60px;">
<p>Sections in this Chapter:</p>
<ul>
<li><a href="#convexity">Convexity</a></li>
<li><a href="#gd">Gradient descent</a></li>
<li><a href="#root">Root-finding</a></li>
<li><a href="#tools">General purpose minimization tools</a></li>
<li><a href="#lp">Linear programming</a></li>
<li><a href="#miscellaneous">Miscellaneous</a></li>
</ul>
<p>We discuss mathematical optimization in this chapter. Mathematical optimization is the selection of a best element from some set of available alternatives. Why do we talk about optimization in a book on data science? Data science is used to help make better decisions, and so is optimization. Operations Research, with optimization at its core, is a discipline that deals with the application of advanced analytical methods to make better decisions<sup class="footnote" id="fnr1"><a href="#fn1">1</a></sup>. I always feel that Data Science and Operations Research overlap greatly. Training a variety of (supervised) machine learning models means minimizing their loss functions, which is essentially an optimization problem. In optimization, loss functions are usually referred to as objective functions.</p>
<p>Mathematical optimization is a very broad field; we ignore the theories in this book and only touch on some of the simplest applications. Actually, in Chapter 4 we have already seen one of its important applications, in linear regression.</p>
<h2 id="convexity">Convexity</h2>
<p>The importance of convexity cannot be overestimated. A convex optimization problem<sup class="footnote" id="fnr2"><a href="#fn2">2</a></sup> has the following form</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
&\min_{\boldsymbol{x}} {f_0(\boldsymbol{x})}\\<br />
&\text{subject to}\quad f_i(\boldsymbol{x})\leq{b_i};i=1,…,m,<br />
\end{split}<br />
\label{eq:convex}<br />
\end{equation}<br />
$$</p>
<p>where the vector $\boldsymbol{x}\in\mathbf{R}^n$ represents the decision variable; $f_i;i=0,…,m$ are convex functions ($\mathbf{R}^n\rightarrow\mathbf{R}$). A function $f_i$ is convex if</p>
<p>$$<br />
\begin{equation}\label{eq:convex2}<br />
f_i(\alpha\boldsymbol{x}+(1-\alpha)\boldsymbol{y})\leq {\alpha f_i(\boldsymbol{x})+ (1-\alpha) f_i(\boldsymbol{y})},<br />
\end{equation}<br />
$$</p>
<p>for all $\boldsymbol{x}$, $\boldsymbol{y}$ $\in\mathbf{R}^n$, and all $\alpha\in\mathbf{R}$ with $0\leq\alpha\leq1$. \eqref{eq:convex} implies that a convex optimization problem requires both the objective function and the set of feasible solutions to be convex. Why do we care about convexity? Because a convex optimization problem has a very nice property – a local minimum (one that is minimal within a neighboring set of candidate solutions) is also a global minimum (the minimal solution among all feasible solutions). Thus, if we can find a local minimum of a convex optimization problem, we get the global minimum.</p>
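<p>To make the definition concrete, the inequality \eqref{eq:convex2} can be spot-checked numerically. The sketch below (illustrative only, not part of the book's code; the helper name <code>is_convex_on_samples</code> is made up) verifies the inequality for $f(x)=x^2$ on a grid of sample points, and shows that it fails for $\sin(x)$:</p>

```python
# Spot-check the convexity inequality f(a*x + (1-a)*y) <= a*f(x) + (1-a)*f(y)
# on sampled points; passing only suggests convexity, but one failing triple
# proves non-convexity.
import numpy as np

def is_convex_on_samples(f, xs, ys, alphas):
    for x in xs:
        for y in ys:
            for a in alphas:
                lhs = f(a * x + (1 - a) * y)
                rhs = a * f(x) + (1 - a) * f(y)
                if lhs > rhs + 1e-12:  # small tolerance for rounding
                    return False
    return True

xs = np.linspace(-5, 5, 11)
alphas = np.linspace(0, 1, 11)
print(is_convex_on_samples(lambda v: v ** 2, xs, xs, alphas))    # True
print(is_convex_on_samples(lambda v: np.sin(v), xs, xs, alphas)) # False
```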
<p>Many methods/tools we will introduce in the following sections won’t throw an error if you feed them a non-convex objective function; however, the solution returned by these numerical optimization methods is not necessarily a global optimum. For example, the algorithm may get trapped in a local optimum.</p>
<figure class="text-center">
<img src="figures/nonconvex.png" alt="A non-convex function $f$. Its local optimum $f(x_2)$ is a global optimum, but the local optimum $f(x_1)$ is not." style="display: block;margin: auto;" width="50%">
<figcaption class="centerfigcaption">A non-convex function $f$. Its local optimum $f(x_2)$ is a global optimum, but the local optimum $f(x_1)$ is not.</figcaption>
</figure>
<h2 id="gd">Gradient descent</h2>
<p>What is the most commonly used optimization algorithm in machine learning? Probably the answer is gradient descent (and its variants). Gradient descent is also an iterative method, and in step $k$ the update of $\boldsymbol{x}$ follows</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\boldsymbol{x}^{(k+1)}=\boldsymbol{x}^{(k)}-\gamma\nabla f(\boldsymbol{x}^{(k)}),<br />
\end{split}<br />
\label{eq:5_gradient_descent}<br />
\end{equation}<br />
$$</p>
<p>where $f$ is the objective function and $\gamma$ is called the step size or learning rate. Starting from an initial value $\boldsymbol{x}^{(0)}$ and following \eqref{eq:5_gradient_descent} with a sufficiently small step size, we get a monotonic sequence $f(\boldsymbol{x}^{(0)})$, $f(\boldsymbol{x}^{(1)})$, …, $f(\boldsymbol{x}^{(n)})$ in the sense that $f(\boldsymbol{x}^{(k)})\geq f(\boldsymbol{x}^{(k+1)})$. When the problem is convex, $f(\boldsymbol{x}^{(k)})$ converges to the global minimum.</p>
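<p>The update rule \eqref{eq:5_gradient_descent} can be illustrated in one dimension (an illustrative sketch, not from the book): minimizing the convex function $f(x)=(x-3)^2$, whose gradient is $2(x-3)$, with a fixed learning rate.</p>

```python
# Vanilla gradient descent on f(x) = (x - 3)**2 with grad f(x) = 2 * (x - 3).
gamma = 0.1   # learning rate, assumed small enough for convergence
x = 0.0       # initial value x^(0)
for _ in range(100):
    x = x - gamma * 2 * (x - 3)   # x^(k+1) = x^(k) - gamma * grad f(x^(k))
print(round(x, 6))  # 3.0 -- the global minimum
```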
<figure class="text-center">
<img src="figures/gd.png" alt="Apply gradient descent step by step for minimization." style="display: block;margin: auto;" width="40%">
<figcaption class="centerfigcaption">Apply gradient descent step by step for minimization.</figcaption>
</figure>
<p>Many machine learning models can be trained with the gradient descent algorithm, for example, linear regression (with or without penalty). Let’s see how to apply the vanilla (standard) gradient descent method to the linear regression model introduced in Chapter 5.</p>
<p>According to the gradient derived in Chapter 5, the update of the parameters becomes:</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\boldsymbol{\hat{\beta}}^{(k+1)}=\boldsymbol{\hat{\beta}}^{(k)}+2\gamma \boldsymbol{X}’(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\hat{\beta}}^{(k)}).<br />
\end{split}<br />
\end{equation}<br />
$$</p>
<language>R</language>
<p>chapter6/linear_regression_gradient_descent.R<br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="kn">library</span><span class="p">(</span>R6<span class="p">)</span>
<span class="lineno"> 2 </span>LR_GD <span class="o">=</span> R6Class<span class="p">(</span>
<span class="lineno"> 3 </span> <span class="s">"LR_GD"</span><span class="p">,</span>
<span class="lineno"> 4 </span> public <span class="o">=</span> <span class="kt">list</span><span class="p">(</span>
<span class="lineno"> 5 </span> coef <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno"> 6 </span> learning_rate <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno"> 7 </span> x <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno"> 8 </span> y <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno"> 9 </span> seed <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno">10 </span> initialize <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>x<span class="p">,</span> y<span class="p">,</span> learning_rate <span class="o">=</span> <span class="m">0.001</span><span class="p">,</span> seed <span class="o">=</span> <span class="m">42</span><span class="p">)</span> <span class="p">{</span>
<span class="lineno">11 </span> <span class="c1"># we add 1s to x</span>
<span class="lineno">12 </span> self<span class="o">$</span>x <span class="o">=</span> <span class="kp">cbind</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> x<span class="p">)</span>
<span class="lineno">13 </span> self<span class="o">$</span>y <span class="o">=</span> y
<span class="lineno">14 </span> self<span class="o">$</span>seed <span class="o">=</span> seed
<span class="lineno">15 </span> <span class="kp">set.seed</span><span class="p">(</span>self<span class="o">$</span>seed<span class="p">)</span>
<span class="lineno">16 </span> <span class="c1"># we use a fixed learning rate here, but it can also be adaptive</span>
<span class="lineno">17 </span> self<span class="o">$</span>learning_rate <span class="o">=</span> learning_rate
<span class="lineno">18 </span> <span class="c1"># let's initialize the coef</span>
<span class="lineno">19 </span> self<span class="o">$</span>coef <span class="o">=</span> runif<span class="p">(</span><span class="kp">ncol</span><span class="p">(</span>self<span class="o">$</span>x<span class="p">))</span>
<span class="lineno">20 </span> <span class="p">},</span>
<span class="lineno">21 </span> fit <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>max_iteration <span class="o">=</span> <span class="m">1000</span><span class="p">)</span> <span class="p">{</span>
<span class="lineno">22 </span> <span class="kr">for</span> <span class="p">(</span>i <span class="kr">in</span> <span class="m">1</span><span class="o">:</span>max_iteration<span class="p">)</span> <span class="p">{</span>
<span class="lineno">23 </span> self<span class="o">$</span>update<span class="p">()</span>
<span class="lineno">24 </span> <span class="p">}</span>
<span class="lineno">25 </span> <span class="p">},</span>
<span class="lineno">26 </span> gradient <span class="o">=</span> <span class="kr">function</span><span class="p">()</span> <span class="p">{</span>
<span class="lineno">27 </span> y_hat <span class="o">=</span> self<span class="o">$</span>predict<span class="p">(</span>self<span class="o">$</span>x<span class="p">)</span>
<span class="lineno">28 </span> <span class="c1"># we normalize the gradient by the sample size</span>
<span class="lineno">29 </span> <span class="o">-</span> <span class="m">2</span> <span class="o">*</span> <span class="p">(</span><span class="kp">t</span><span class="p">(</span>self<span class="o">$</span>x<span class="p">)</span> <span class="o">%*%</span> <span class="p">(</span>self<span class="o">$</span>y <span class="o">-</span> y_hat<span class="p">))</span> <span class="o">/</span> <span class="kp">nrow</span><span class="p">(</span>self<span class="o">$</span>x<span class="p">)</span>
<span class="lineno">30 </span> <span class="p">},</span>
<span class="lineno">31 </span> update <span class="o">=</span> <span class="kr">function</span><span class="p">()</span> <span class="p">{</span>
<span class="lineno">32 </span> self<span class="o">$</span>coef <span class="o">=</span> self<span class="o">$</span>coef <span class="o">-</span> self<span class="o">$</span>learning_rate <span class="o">*</span> self<span class="o">$</span>gradient<span class="p">()</span>
<span class="lineno">33 </span> <span class="p">},</span>
<span class="lineno">34 </span> predict <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>new_x<span class="p">)</span> <span class="p">{</span>
<span class="lineno">35 </span> new_x <span class="o">%*%</span> self<span class="o">$</span>coef
<span class="lineno">36 </span> <span class="p">}</span>
<span class="lineno">37 </span> <span class="p">)</span>
<span class="lineno">38 </span><span class="p">)</span></code></pre></figure></p>
<p>Accordingly, we have the Python implementation as follows.</p>
<language>Python</language>
<p>chapter6/linear_regression_gradient_descent.py<br />
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="k">class</span> <span class="nc">LR_GD</span><span class="p">:</span>
<span class="lineno"> 2 </span> <span class="sd">"""</span>
<span class="lineno"> 3 </span><span class="sd"> linear regression with vanilla gradient descent</span>
<span class="lineno"> 4 </span><span class="sd"> """</span>
<span class="lineno"> 5 </span>
<span class="lineno"> 6 </span> <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">learning_rate</span><span class="o">=</span><span class="mf">0.005</span><span class="p">,</span> <span class="n">seed</span><span class="o">=</span><span class="mi">42</span><span class="p">):</span>
<span class="lineno"> 7 </span> <span class="c1"># we put 1s in the first column of x</span>
<span class="lineno"> 8 </span> <span class="bp">self</span><span class="o">.</span><span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">hstack</span><span class="p">((</span><span class="n">np</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="mi">1</span><span class="p">)),</span> <span class="n">x</span><span class="p">))</span>
<span class="lineno"> 9 </span> <span class="bp">self</span><span class="o">.</span><span class="n">y</span> <span class="o">=</span> <span class="n">y</span><span class="p">[:,</span> <span class="kc">None</span><span class="p">]</span>
<span class="lineno">10 </span> <span class="bp">self</span><span class="o">.</span><span class="n">seed</span> <span class="o">=</span> <span class="n">seed</span>
<span class="lineno">11 </span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">seed</span><span class="p">)</span>
<span class="lineno">12 </span> <span class="bp">self</span><span class="o">.</span><span class="n">coef</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">size</span><span class="o">=</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="mi">1</span><span class="p">))</span>
<span class="lineno">13 </span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate</span> <span class="o">=</span> <span class="n">learning_rate</span>
<span class="lineno">14 </span>
<span class="lineno">15 </span> <span class="k">def</span> <span class="nf">predict</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">new_x</span><span class="p">):</span>
<span class="lineno">16 </span> <span class="c1"># let's use @ for matrix multiplication</span>
<span class="lineno">17 </span> <span class="k">return</span> <span class="n">new_x</span> <span class="o">@</span> <span class="bp">self</span><span class="o">.</span><span class="n">coef</span>
<span class="lineno">18 </span>
<span class="lineno">19 </span> <span class="k">def</span> <span class="nf">gradient</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="lineno">20 </span> <span class="n">y_hat</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">x</span><span class="p">)</span>
<span class="lineno">21 </span> <span class="k">return</span> <span class="o">-</span><span class="mf">2.0</span><span class="o">*</span><span class="bp">self</span><span class="o">.</span><span class="n">x</span><span class="o">.</span><span class="n">T</span> <span class="o">@</span> <span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">y</span><span class="o">-</span><span class="n">y_hat</span><span class="p">)</span><span class="o">/</span><span class="bp">self</span><span class="o">.</span><span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="lineno">22 </span>
<span class="lineno">23 </span> <span class="k">def</span> <span class="nf">update</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="lineno">24 </span> <span class="bp">self</span><span class="o">.</span><span class="n">coef</span> <span class="o">-=</span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate</span><span class="o">*</span><span class="bp">self</span><span class="o">.</span><span class="n">gradient</span><span class="p">()</span>
<span class="lineno">25 </span>
<span class="lineno">26 </span> <span class="k">def</span> <span class="nf">fit</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">max_iteration</span><span class="o">=</span><span class="mi">1000</span><span class="p">):</span>
<span class="lineno">27 </span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">max_iteration</span><span class="p">):</span>
<span class="lineno">28 </span> <span class="bp">self</span><span class="o">.</span><span class="n">update</span><span class="p">()</span></code></pre></figure></p>
<p>Let’s use a simulated dataset to test our algorithm.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kn">source</span><span class="p">(</span><span class="s">'gradient_descent.R'</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> n <span class="o">=</span> <span class="m">1000</span>
<span class="lineno"> 3 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">></span> x <span class="o">=</span> <span class="kp">cbind</span><span class="p">(</span>rnorm<span class="p">(</span>n<span class="p">),</span> runif<span class="p">(</span>n<span class="p">))</span>
<span class="lineno"> 5 </span><span class="o">></span> b <span class="o">=</span> <span class="m">6</span> <span class="c1"># intercept</span>
<span class="lineno"> 6 </span><span class="o">></span> coef <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">2</span><span class="p">,</span> <span class="m">3</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">></span> err <span class="o">=</span> runif<span class="p">(</span>n<span class="p">,</span> <span class="m">0</span><span class="p">,</span> <span class="m">0.2</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="o">></span> y <span class="o">=</span> x <span class="o">%*%</span> coef <span class="o">+</span> b <span class="o">+</span> err
<span class="lineno"> 9 </span><span class="o">></span> lr_gd <span class="o">=</span> LR_GD<span class="o">$</span>new<span class="p">(</span>x <span class="o">=</span> x<span class="p">,</span> y <span class="o">=</span> y<span class="p">,</span> learning_rate <span class="o">=</span> <span class="m">0.05</span><span class="p">)</span>
<span class="lineno">10 </span><span class="o">></span> lr_gd<span class="o">$</span>fit<span class="p">(</span><span class="m">1000</span><span class="p">)</span>
<span class="lineno">11 </span><span class="o">></span> lr_gd<span class="o">$</span>coef
<span class="lineno">12 </span> <span class="p">[,</span><span class="m">1</span><span class="p">]</span>
<span class="lineno">13 </span><span class="p">[</span><span class="m">1</span><span class="p">,]</span> <span class="m">6.106233</span>
<span class="lineno">14 </span><span class="p">[</span><span class="m">2</span><span class="p">,]</span> <span class="m">2.001042</span>
<span class="lineno">15 </span><span class="p">[</span><span class="m">3</span><span class="p">,]</span> <span class="m">2.992593</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">gradient_descent</span> <span class="k">import</span> <span class="n">LR_GD</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">n</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">column_stack</span><span class="p">((</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">),</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">)))</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="n">b</span> <span class="o">=</span> <span class="mi">6</span> <span class="c1"># intercept</span>
<span class="lineno"> 7 </span><span class="o">>>></span> <span class="n">coef</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">])</span>
<span class="lineno"> 8 </span><span class="o">>>></span> <span class="n">err</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">size</span><span class="o">=</span><span class="n">n</span><span class="p">,</span> <span class="n">low</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">high</span><span class="o">=</span><span class="mf">0.2</span><span class="p">)</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">y</span> <span class="o">=</span> <span class="n">x</span> <span class="o">@</span> <span class="n">coef</span> <span class="o">+</span> <span class="n">b</span> <span class="o">+</span> <span class="n">err</span>
<span class="lineno">10 </span><span class="o">>>></span> <span class="n">lr_gd</span> <span class="o">=</span> <span class="n">LR_GD</span><span class="p">(</span><span class="n">x</span><span class="o">=</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="o">=</span><span class="n">y</span><span class="p">,</span> <span class="n">learning_rate</span><span class="o">=</span><span class="mf">0.05</span><span class="p">)</span>
<span class="lineno">11 </span><span class="o">>>></span> <span class="n">lr_gd</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="mi">1000</span><span class="p">)</span>
<span class="lineno">12 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">lr_gd</span><span class="o">.</span><span class="n">coef</span><span class="p">)</span>
<span class="lineno">13 </span><span class="p">[[</span><span class="mf">6.09887817</span><span class="p">]</span>
<span class="lineno">14 </span> <span class="p">[</span><span class="mf">2.00028507</span><span class="p">]</span>
<span class="lineno">15 </span> <span class="p">[</span><span class="mf">2.99996673</span><span class="p">]]</span></code></pre></figure></div>
</div>
<p>The results show that our implementation of the gradient descent update works well on the simulated dataset for linear regression.</p>
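<p>As an extra sanity check (a sketch, not part of the book's code), the estimates can be compared against the closed-form least-squares solution computed by <code>numpy.linalg.lstsq</code> on the same simulated data:</p>

```python
# Re-simulate the same data as in the Python session above, then solve the
# least-squares problem directly; the estimates should agree closely with
# the gradient descent result.
import numpy as np

np.random.seed(42)
n = 1000
x = np.column_stack((np.random.normal(size=n), np.random.uniform(size=n)))
err = np.random.uniform(size=n, low=0, high=0.2)
y = x @ np.array([2, 3]) + 6 + err

X = np.hstack((np.ones((n, 1)), x))           # prepend the intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # closed-form OLS estimate
print(beta)  # close to [6.1, 2.0, 3.0]; the intercept absorbs mean(err) = 0.1
```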
<appname>Application – Lasso solution of linear regression</appname>
<blockquote class="appquote">
<p>
However, when the loss function is non-differentiable, the vanilla gradient descent algorithm \eqref{eq:5_gradient_descent} cannot be used. In Chapter 5, we added the $L^2$ norm (also referred to as the Euclidean norm<sup class="footnote" id="fnr3"><a href="#fn3">3</a></sup>) of $\boldsymbol{\beta}$ to the loss function to get the ridge regression. What if we change the $L^2$ norm to the $L^1$ norm in the loss function?
</p>
<p>$$<br />
\begin{equation}\label{eq:5_lasso}<br />
\min_{\boldsymbol{\hat\beta}} \quad \boldsymbol{e}’\boldsymbol{e} + \lambda \sum_{i=1}^p {|\boldsymbol{\beta}_i|},<br />
\end{equation}<br />
$$<br />
where $\lambda>0$.</p>
<p>Solving the optimization problem specified in \eqref{eq:5_lasso}, we get the Lasso solution of linear regression. Lasso is not a specific type of machine learning model; rather, Lasso stands for least absolute shrinkage and selection operator. It is a method that performs both variable selection and regularization. What does variable selection mean? When we build a machine learning model, we may collect as many variables (also called features, or independent variables) as possible. However, if only a subset of these variables is relevant to the task, we may want to select that subset explicitly, for a few reasons. First, doing so can reduce the cost (in time or money) of collecting and processing the irrelevant variables. Second, including irrelevant variables in some machine learning models may hurt model performance. But why? Let’s take linear regression as an example. Including an irrelevant variable in a linear regression model is usually referred to as model misspecification (of course, omitting a relevant variable also results in a misspecified model). In theory, the estimator of the coefficients is still unbiased ($\hat{\theta}$ is an unbiased estimator of $\theta$ if $E(\hat{\theta})=\theta$) when an irrelevant variable is included. But in practice, when the irrelevant variable has a non-zero sample correlation with the relevant variables or with the response variable $\boldsymbol{y}$, the coefficient estimates may change. In addition, the estimator of the coefficients becomes inefficient<sup class="footnote" id="fnr4"><a href="#fn4">4</a></sup>. Thus, it’s better to select the relevant variables for a linear regression model.</p>
</blockquote>
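<p>To make the inefficiency point concrete, here is a minimal NumPy sketch (simulated data; all names are illustrative and not part of the book’s code). Since $\mathrm{Var}(\hat{\boldsymbol{\beta}}) = \sigma^2(\boldsymbol{X}’\boldsymbol{X})^{-1}$ for ordinary least squares, adding an irrelevant variable that is correlated with a relevant one inflates the variance of the coefficient estimator:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)                                    # relevant variable
x2 = 0.9 * x1 + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n) # irrelevant but correlated with x1

X_small = np.column_stack([np.ones(n), x1])        # correctly specified design
X_big = np.column_stack([np.ones(n), x1, x2])      # misspecified: includes x2

# up to sigma^2, Var(beta_hat for x1) is the corresponding diagonal entry of (X'X)^{-1}
v_small = np.linalg.inv(X_small.T @ X_small)[1, 1]
v_big = np.linalg.inv(X_big.T @ X_big)[1, 1]
print(v_big / v_small)  # ratio > 1: the estimator is less efficient
```

The ratio is the familiar variance inflation factor $1/(1-r^2)$, where $r$ is the sample correlation between the two regressors after centering.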
<p>The Lasso solution is very important in machine learning because of its sparsity, which results in automated variable selection. As for why the Lasso solution is sparse, a theoretical explanation can be found in the reference book<sup class="footnote" id="fnr5"><a href="#fn5">5</a></sup>. Let’s focus on how to solve it. Although \eqref{eq:5_lasso} is still a convex optimization problem, the gradient descent algorithm cannot be applied directly to solve it, since the regularized loss function is non-differentiable because of the absolute value function $|\cdot|$.</p>
<p><b>Proximal gradient</b> One of the algorithms that can be used to solve \eqref{eq:5_lasso} is called proximal gradient descent<sup class="footnote" id="fnr6"><a href="#fn6">6</a></sup>. Let’s skip the theory of proximal methods and the tedious linear algebra, and jump directly to the solution. With proximal gradient descent, the update in each iteration is given as follows</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\boldsymbol{\beta}^{(k+1)} &= S_{\lambda \gamma}(\boldsymbol{\beta}^{(k)}-2\gamma \nabla{(\boldsymbol{e}’\boldsymbol{e} )}) \\<br />
&= S_{\lambda \gamma}(\boldsymbol{\beta}^{(k)}+2\gamma \boldsymbol{X}’(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}^{(k)})) ,<br />
\end{split}<br />
\label{eq:5_lasso_1}<br />
\end{equation}<br />
$$</p>
<p>where $S_{\theta}(\cdot)$ is called soft-thresholding operator given by</p>
<p>$$<br />
\begin{equation}\label{eq:sto}<br />
[S_{\theta}(\boldsymbol{z})]_i =\left\{\begin{array}{ll} \boldsymbol{z}_i - \theta & \boldsymbol{z}_i>\theta;\\<br />
0 & |\boldsymbol{z}_i|\leq\theta; \\<br />
\boldsymbol{z}_i + \theta & \boldsymbol{z}_i<-\theta.<br />
\end{array} \right.<br />
\end{equation}<br />
$$</p>
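<p>As a quick sanity check (a minimal NumPy sketch, separate from the book’s implementations below), applying \eqref{eq:sto} elementwise shrinks each entry toward zero by $\theta$ and zeroes out any entry whose magnitude is at most $\theta$:</p>

```python
import numpy as np

def sto(z, theta):
    """Soft-thresholding operator applied elementwise to the vector z."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

print(sto(np.array([3.0, 0.5, -2.0]), 1.0))  # [ 2.  0. -1.]
```

Note that the middle entry, with $|0.5|\leq 1$, is mapped to exactly zero; this is the mechanism behind Lasso's sparsity.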
<p>Now we are ready to implement the Lasso solution of linear regression via \eqref{eq:sto}. Similar to ridge regression, we don’t want to put a penalty on the intercept, and a natural estimate of the intercept is the mean of the response, i.e., $\bar{\boldsymbol{y}}$. The learning rate $\gamma$ in each update could be fixed or adaptive (changing over iterations). In the implementation below, we use the reciprocal of the largest eigenvalue of $\boldsymbol{X}’\boldsymbol{X}$ as the fixed learning rate.</p>
<language>R</language>
<p>chapter5/lasso.R<br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="kn">library</span><span class="p">(</span>R6<span class="p">)</span>
<span class="lineno"> 2 </span>
<span class="lineno"> 3 </span><span class="c1"># soft-thresholding operator</span>
<span class="lineno"> 4 </span>sto <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>z<span class="p">,</span> theta<span class="p">){</span>
<span class="lineno"> 5 </span> <span class="kp">sign</span><span class="p">(</span>z<span class="p">)</span><span class="o">*</span><span class="kp">pmax</span><span class="p">(</span><span class="kp">abs</span><span class="p">(</span>z<span class="p">)</span><span class="o">-</span><span class="kp">rep</span><span class="p">(</span>theta<span class="p">,</span><span class="kp">length</span><span class="p">(</span>z<span class="p">)),</span> <span class="m">0.0</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="p">}</span>
<span class="lineno"> 7 </span>
<span class="lineno"> 8 </span>Lasso <span class="o">=</span> R6Class<span class="p">(</span>
<span class="lineno"> 9 </span> <span class="s">"LassoLinearRegression"</span><span class="p">,</span>
<span class="lineno">10 </span> public <span class="o">=</span> <span class="kt">list</span><span class="p">(</span>
<span class="lineno">11 </span> intercept <span class="o">=</span> <span class="m">0</span><span class="p">,</span>
<span class="lineno">12 </span> beta <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno">13 </span> lambda <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno">14 </span> mu <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno">15 </span> sd <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno">16 </span> initialize <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>lambda<span class="p">)</span> <span class="p">{</span>
<span class="lineno">17 </span> self<span class="o">$</span>lambda <span class="o">=</span> lambda
<span class="lineno">18 </span> <span class="p">},</span>
<span class="lineno">19 </span> scale <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>x<span class="p">)</span> <span class="p">{</span>
<span class="lineno">20 </span> self<span class="o">$</span>mu <span class="o">=</span> <span class="kp">apply</span><span class="p">(</span>x<span class="p">,</span> <span class="m">2</span><span class="p">,</span> <span class="kp">mean</span><span class="p">)</span>
<span class="lineno">21 </span> self<span class="o">$</span>sd <span class="o">=</span> <span class="kp">apply</span><span class="p">(</span>x<span class="p">,</span> <span class="m">2</span><span class="p">,</span> <span class="kr">function</span><span class="p">(</span>e<span class="p">)</span> <span class="p">{</span>
<span class="lineno">22 </span> <span class="kp">sqrt</span><span class="p">((</span><span class="kp">length</span><span class="p">(</span>e<span class="p">)</span><span class="m">-1</span><span class="p">)</span> <span class="o">/</span> <span class="kp">length</span><span class="p">(</span>e<span class="p">))</span><span class="o">*</span>sd<span class="p">(</span>e<span class="p">)</span>
<span class="lineno">23 </span> <span class="p">})</span>
<span class="lineno">24 </span> <span class="p">},</span>
<span class="lineno">25 </span> transform <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>x<span class="p">)</span> <span class="kp">t</span><span class="p">((</span><span class="kp">t</span><span class="p">(</span>x<span class="p">)</span> <span class="o">-</span> self<span class="o">$</span>mu<span class="p">)</span> <span class="o">/</span> self<span class="o">$</span>sd<span class="p">),</span>
<span class="lineno">26 </span> fit <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>x<span class="p">,</span> y<span class="p">,</span> max_iter<span class="o">=</span><span class="m">100</span><span class="p">)</span> <span class="p">{</span>
<span class="lineno">27 </span> <span class="kr">if</span> <span class="p">(</span><span class="o">!</span><span class="kp">is.matrix</span><span class="p">(</span>x<span class="p">))</span> x <span class="o">=</span> <span class="kp">data.matrix</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno">28 </span> self<span class="o">$</span><span class="kp">scale</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno">29 </span> x_transformed <span class="o">=</span> self<span class="o">$</span><span class="kp">transform</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno">30 </span> self<span class="o">$</span>intercept <span class="o">=</span> <span class="kp">mean</span><span class="p">(</span>y<span class="p">)</span>
<span class="lineno">31 </span> y_centered <span class="o">=</span> y <span class="o">-</span> self<span class="o">$</span>intercept
<span class="lineno">32 </span> gamma <span class="o">=</span> <span class="m">1</span><span class="o">/</span><span class="p">(</span><span class="kp">eigen</span><span class="p">(</span><span class="kp">t</span><span class="p">(</span>x_transformed<span class="p">)</span> <span class="o">%*%</span> x_transformed<span class="p">,</span> only.values<span class="o">=</span><span class="kc">TRUE</span><span class="p">)</span><span class="o">$</span>values<span class="p">[</span><span class="m">1</span><span class="p">])</span>
<span class="lineno">33 </span> beta <span class="o">=</span> <span class="kp">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="kp">ncol</span><span class="p">(</span>x<span class="p">))</span>
<span class="lineno">34 </span> <span class="kr">for</span> <span class="p">(</span>i <span class="kr">in</span> <span class="m">1</span><span class="o">:</span>max_iter<span class="p">){</span>
<span class="lineno">35 </span> nabla <span class="o">=</span> <span class="o">-</span> <span class="kp">t</span><span class="p">(</span>x_transformed<span class="p">)</span> <span class="o">%*%</span> <span class="p">(</span>y_centered <span class="o">-</span> x_transformed <span class="o">%*%</span> <span class="kp">beta</span><span class="p">)</span>
<span class="lineno">36 </span> z <span class="o">=</span> beta <span class="o">-</span> <span class="m">2</span><span class="o">*</span><span class="kp">gamma</span><span class="o">*</span>nabla
<span class="lineno">37 </span> <span class="kp">print</span><span class="p">(</span>z<span class="p">)</span>
<span class="lineno">38 </span> beta <span class="o">=</span> sto<span class="p">(</span>z<span class="p">,</span> self<span class="o">$</span>lambda<span class="o">*</span><span class="kp">gamma</span><span class="p">)</span>
<span class="lineno">39 </span> <span class="p">}</span>
<span class="lineno">40 </span> self<span class="o">$</span>beta <span class="o">=</span> <span class="kp">beta</span>
<span class="lineno">41 </span> <span class="p">},</span>
<span class="lineno">42 </span> predict <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>new_x<span class="p">)</span> <span class="p">{</span>
<span class="lineno">43 </span> <span class="kr">if</span> <span class="p">(</span><span class="o">!</span><span class="kp">is.matrix</span><span class="p">(</span>new_x<span class="p">))</span> new_x <span class="o">=</span> <span class="kp">data.matrix</span><span class="p">(</span>new_x<span class="p">)</span>
<span class="lineno">44 </span> self<span class="o">$</span><span class="kp">transform</span><span class="p">(</span>new_x<span class="p">)</span> <span class="o">%*%</span> self<span class="o">$</span>beta <span class="o">+</span> self<span class="o">$</span>intercept
<span class="lineno">45 </span> <span class="p">}</span>
<span class="lineno">46 </span> <span class="p">)</span>
<span class="lineno">47 </span><span class="p">)</span></code></pre></figure></p>
<language>Python</language>
<p>chapter5/lasso.py<br />
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="kn">from</span> <span class="nn">sklearn.preprocessing</span> <span class="k">import</span> <span class="n">StandardScaler</span>
<span class="lineno"> 3 </span>
<span class="lineno"> 4 </span><span class="k">def</span> <span class="nf">sto</span><span class="p">(</span><span class="n">z</span><span class="p">,</span> <span class="n">theta</span><span class="p">):</span>
<span class="lineno"> 5 </span> <span class="k">return</span> <span class="n">np</span><span class="o">.</span><span class="n">sign</span><span class="p">(</span><span class="n">z</span><span class="p">)</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">maximum</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">abs</span><span class="p">(</span><span class="n">z</span><span class="p">)</span><span class="o">-</span><span class="n">np</span><span class="o">.</span><span class="n">full</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">z</span><span class="p">),</span><span class="n">theta</span><span class="p">),</span> <span class="mf">0.0</span><span class="p">)</span>
<span class="lineno"> 6 </span>
<span class="lineno"> 7 </span><span class="k">class</span> <span class="nc">Lasso</span><span class="p">:</span>
<span class="lineno"> 8 </span> <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">l</span><span class="p">):</span>
<span class="lineno"> 9 </span> <span class="bp">self</span><span class="o">.</span><span class="n">l</span> <span class="o">=</span> <span class="n">l</span> <span class="c1"># l is lambda</span>
<span class="lineno">10 </span> <span class="bp">self</span><span class="o">.</span><span class="n">intercept</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="lineno">11 </span> <span class="bp">self</span><span class="o">.</span><span class="n">beta</span> <span class="o">=</span> <span class="kc">None</span>
<span class="lineno">12 </span> <span class="bp">self</span><span class="o">.</span><span class="n">scaler</span> <span class="o">=</span> <span class="n">StandardScaler</span><span class="p">()</span>
<span class="lineno">13 </span>
<span class="lineno">14 </span> <span class="k">def</span> <span class="nf">fit</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">max_iter</span><span class="o">=</span><span class="mi">100</span><span class="p">):</span>
<span class="lineno">15 </span> <span class="bp">self</span><span class="o">.</span><span class="n">intercept</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">mean</span><span class="p">(</span><span class="n">y</span><span class="p">)</span>
<span class="lineno">16 </span> <span class="n">y_centered</span> <span class="o">=</span> <span class="n">y</span><span class="o">-</span><span class="bp">self</span><span class="o">.</span><span class="n">intercept</span>
<span class="lineno">17 </span> <span class="n">x_transformed</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">scaler</span><span class="o">.</span><span class="n">fit_transform</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno">18 </span> <span class="n">gamma</span> <span class="o">=</span> <span class="mf">1.0</span><span class="o">/</span><span class="n">np</span><span class="o">.</span><span class="n">linalg</span><span class="o">.</span><span class="n">eig</span><span class="p">(</span><span class="n">x_transformed</span><span class="o">.</span><span class="n">T</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">x_transformed</span><span class="p">))[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">max</span><span class="p">()</span>
<span class="lineno">19 </span> <span class="n">beta</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="n">x_transformed</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="lineno">20 </span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">max_iter</span><span class="p">):</span>
<span class="lineno">21 </span> <span class="n">nabla</span> <span class="o">=</span> <span class="o">-</span> <span class="n">np</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">x_transformed</span><span class="o">.</span><span class="n">T</span><span class="p">,</span> <span class="n">y_centered</span><span class="o">-</span><span class="n">np</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">x_transformed</span><span class="p">,</span> <span class="n">beta</span><span class="p">))</span>
<span class="lineno">22 </span> <span class="n">z</span> <span class="o">=</span> <span class="n">beta</span> <span class="o">-</span> <span class="mi">2</span><span class="o">*</span><span class="n">gamma</span><span class="o">*</span><span class="n">nabla</span>
<span class="lineno">23 </span> <span class="n">beta</span> <span class="o">=</span> <span class="n">sto</span><span class="p">(</span><span class="n">z</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">l</span><span class="o">*</span><span class="n">gamma</span><span class="p">)</span>
<span class="lineno">24 </span> <span class="nb">print</span><span class="p">(</span><span class="n">z</span><span class="p">)</span>
<span class="lineno">25 </span> <span class="bp">self</span><span class="o">.</span><span class="n">beta</span> <span class="o">=</span> <span class="n">beta</span>
<span class="lineno">26 </span>
<span class="lineno">27 </span> <span class="k">def</span> <span class="nf">predict</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">new_x</span><span class="p">):</span>
<span class="lineno">28 </span> <span class="n">new_x_transformed</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">scaler</span><span class="o">.</span><span class="n">transform</span><span class="p">(</span><span class="n">new_x</span><span class="p">)</span>
<span class="lineno">29 </span> <span class="k">return</span> <span class="n">np</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">new_x_transformed</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">beta</span><span class="p">)</span> <span class="o">+</span> <span class="bp">self</span><span class="o">.</span><span class="n">intercept</span></code></pre></figure></p>
<p>Now, let’s apply our own Lasso solution of linear regression to the Boston house-prices dataset.</p>
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kn">source</span><span class="p">(</span><span class="s">'lasso.R'</span><span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> <span class="kn">library</span><span class="p">(</span>MASS<span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">></span> lr <span class="o">=</span> Lasso<span class="o">$</span>new<span class="p">(</span><span class="m">200</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">></span> lr<span class="o">$</span>fit<span class="p">(</span><span class="kp">data.matrix</span><span class="p">(</span>Boston<span class="p">[,</span><span class="o">-</span><span class="kp">ncol</span><span class="p">(</span>Boston<span class="p">)]),</span> Boston<span class="o">$</span>medv<span class="p">,</span> <span class="m">100</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>lr<span class="o">$</span><span class="kp">beta</span><span class="p">)</span>
<span class="lineno"> 6 </span> <span class="p">[,</span><span class="m">1</span><span class="p">]</span>
<span class="lineno"> 7 </span>crim <span class="m">-0.34604592</span>
<span class="lineno"> 8 </span>zn <span class="m">0.38537301</span>
<span class="lineno"> 9 </span>indus <span class="m">-0.03155016</span>
<span class="lineno">10 </span>chas <span class="m">0.61871441</span>
<span class="lineno">11 </span>nox <span class="m">-1.08927715</span>
<span class="lineno">12 </span>rm <span class="m">2.96073304</span>
<span class="lineno">13 </span>age <span class="m">0.00000000</span>
<span class="lineno">14 </span>dis <span class="m">-1.74602943</span>
<span class="lineno">15 </span>rad <span class="m">0.02111245</span>
<span class="lineno">16 </span>tax <span class="m">0.00000000</span>
<span class="lineno">17 </span>ptratio <span class="m">-1.77697602</span>
<span class="lineno">18 </span>black <span class="m">0.67323937</span>
<span class="lineno">19 </span>lstat <span class="m">-3.71677103</span></code></pre></figure><language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">lasso</span> <span class="k">import</span> <span class="n">Lasso</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">sklearn.datasets</span> <span class="k">import</span> <span class="n">load_boston</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">boston</span> <span class="o">=</span> <span class="n">load_boston</span><span class="p">()</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="n">X</span><span class="p">,</span> <span class="n">y</span> <span class="o">=</span> <span class="n">boston</span><span class="o">.</span><span class="n">data</span><span class="p">,</span> <span class="n">boston</span><span class="o">.</span><span class="n">target</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">lr</span> <span class="o">=</span> <span class="n">Lasso</span><span class="p">(</span><span class="mf">200.0</span><span class="p">)</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="n">lr</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">max_iter</span><span class="o">=</span><span class="mi">100</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">>>></span> <span class="n">lr</span><span class="o">.</span><span class="n">beta</span>
<span class="lineno"> 8 </span><span class="n">array</span><span class="p">([</span><span class="o">-</span><span class="mf">0.34604592</span><span class="p">,</span> <span class="mf">0.38537301</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.03155016</span><span class="p">,</span> <span class="mf">0.61871441</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.08927715</span><span class="p">,</span>
<span class="lineno"> 9 </span> <span class="mf">2.96073304</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.</span> <span class="p">,</span> <span class="o">-</span><span class="mf">1.74602943</span><span class="p">,</span> <span class="mf">0.02111245</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.</span> <span class="p">,</span>
<span class="lineno">10 </span> <span class="o">-</span><span class="mf">1.77697602</span><span class="p">,</span> <span class="mf">0.67323937</span><span class="p">,</span> <span class="o">-</span><span class="mf">3.71677103</span><span class="p">])</span></code></pre></figure><p>In the example above, we set the $\lambda=200.0$ arbitrarily. The results show that the coefficients of <code>age</code> and <code>tax</code> are zero. In practice, the selection of $\lambda$ is usually done by cross-validation which would be introduced later in this book. If you have used the off-the-shelf Lasso solvers in R/Python you may wonder why the $\lambda$ used in this example is so big. One major reason is that in our implementation the first item in the loss function, i.e., $\boldsymbol{e}’\boldsymbol{e}$ is not scaled by the number of observations.</p>
<p>In practice, one challenge in applying Lasso regression is the selection of the parameter $\lambda$. We will discuss this in more detail in Chapter 6. The basic idea is to select the $\lambda$ that minimizes the prediction error on unseen data.</p>
<h2 id="root">Root-finding</h2>
<p>There is a close relationship between optimization and root-finding. To find the roots of $f(x)=0$, we may try to minimize $f^2(x)$ instead. Conversely, under certain conditions, to minimize $f(x)$ we may solve the root-finding problem $f’(x)=0$ (e.g., see linear regression in Chapter 4). Various root-finding methods are available in both R and Python.</p>
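<p>As a toy illustration of the second direction (a sketch with a made-up function, not tied to any later example): to minimize $f(x)=(x-2)^2+1$, we can instead find the root of $f’(x)=2(x-2)$, for instance with bisection:</p>

```python
def fprime(x):
    # derivative of f(x) = (x - 2)**2 + 1
    return 2.0 * (x - 2.0)

def bisect(g, lo, hi, tol=1e-10):
    """Find a root of g in [lo, hi], assuming g(lo) and g(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(lo) * g(mid) <= 0:
            hi = mid  # the sign change is in [lo, mid]
        else:
            lo = mid  # the sign change is in [mid, hi]
    return (lo + hi) / 2.0

x_star = bisect(fprime, -10.0, 10.0)
print(x_star)  # ~2.0, the minimizer of f
```

This is exactly how we will treat the IRR problem below: rather than minimizing anything, we hand the NPV function to a root-finder.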
<p>Let’s see an application of root-finding in Finance.</p>
<appname>Application – Internal rate of return</appname>
<blockquote class="appquote">
<p>Internal rate of return (<span class="caps">IRR</span>) is a measure used to estimate an investment’s potential profitability. In fact, it is the discount rate that makes the net present value (<span class="caps">NPV</span>) of all future cashflows (positive for inflows and negative for outflows) equal to zero. Given the discount rate $r$, the <span class="caps">NPV</span> is calculated as</p>
<p>$$<br />
\begin{equation}\label{eq:npv}<br />
\sum_{t=0}^T C_t/(1+r)^t,<br />
\end{equation}<br />
$$<br />
<p>where $C_t$ denotes the cashflow at time $t$, and $t$ is measured as the duration from time 0. Thus, we can solve for the <span class="caps">IRR</span> by finding the root of the <span class="caps">NPV</span> as a function of $r$.</p>
</p>
</blockquote>
<p>Let’s try to solve <span class="caps">IRR</span> in R/Python.</p>
<language>R</language>
<p>chapter5/xirr.R<br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span>NPV <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>cashflow_dates<span class="p">,</span> cashflow_amounts<span class="p">,</span> discount_rate<span class="p">){</span>
<span class="lineno"> 2 </span> <span class="kp">stopifnot</span><span class="p">(</span><span class="kp">length</span><span class="p">(</span>cashflow_dates<span class="p">)</span> <span class="o">==</span> <span class="kp">length</span><span class="p">(</span>cashflow_amounts<span class="p">))</span>
<span class="lineno"> 3 </span> days_in_year <span class="o">=</span> <span class="m">365</span> <span class="c1"># we assume there are 365 days in each year</span>
<span class="lineno"> 4 </span> times<span class="o">=</span> <span class="kp">as.numeric</span><span class="p">(</span><span class="kp">difftime</span><span class="p">(</span>cashflow_dates<span class="p">,</span> cashflow_dates<span class="p">[</span><span class="m">1</span><span class="p">],</span> units<span class="o">=</span><span class="s">"days"</span><span class="p">))</span><span class="o">/</span>days_in_year
<span class="lineno"> 5 </span> <span class="kp">sum</span><span class="p">(</span>cashflow_amounts<span class="o">/</span><span class="p">(</span><span class="m">1.0</span><span class="o">+</span>discount_rate<span class="p">)</span><span class="o">^</span>times<span class="p">)</span>
<span class="lineno"> 6 </span><span class="p">}</span>
<span class="lineno"> 7 </span><span class="c1"># we name this function as 'xirr' to follow the convention of MS Excel</span>
<span class="lineno"> 8 </span>xirr <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>cashflow_dates<span class="p">,</span> cashflow_amounts<span class="p">){</span>
<span class="lineno"> 9 </span> <span class="c1"># we use uniroot function in R for root-finding</span>
<span class="lineno">10 </span> uniroot<span class="p">(</span>f<span class="o">=</span><span class="kr">function</span><span class="p">(</span>x<span class="p">)</span> NPV<span class="p">(</span>cashflow_dates<span class="p">,</span> cashflow_amounts<span class="p">,</span> x<span class="p">),</span> interval<span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">-1</span><span class="p">,</span><span class="m">100</span><span class="p">))</span><span class="o">$</span>root
<span class="lineno">11 </span><span class="p">}</span></code></pre></figure></p>
<language>Python</language>
<p>chapter5/xirr.py<br />
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">from</span> <span class="nn">datetime</span> <span class="k">import</span> <span class="n">datetime</span>
<span class="lineno"> 2 </span><span class="kn">from</span> <span class="nn">scipy.optimize</span> <span class="k">import</span> <span class="n">root</span>
<span class="lineno"> 3 </span><span class="k">def</span> <span class="nf">NPV</span><span class="p">(</span><span class="n">cashflow_dates</span><span class="p">,</span> <span class="n">cashflow_amounts</span><span class="p">,</span> <span class="n">discount_rate</span><span class="p">):</span>
<span class="lineno"> 4 </span> <span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">cashflow_dates</span><span class="p">)</span> <span class="o">==</span> <span class="nb">len</span><span class="p">(</span><span class="n">cashflow_amounts</span><span class="p">),</span> <span class="s2">"inconsistent lengths of cashflows dates and amounts"</span>
<span class="lineno"> 5 </span> <span class="n">days_in_year</span><span class="o">=</span><span class="mf">365.0</span>
<span class="lineno"> 6 </span> <span class="n">times</span> <span class="o">=</span> <span class="p">[(</span><span class="n">e</span> <span class="o">-</span> <span class="n">cashflow_dates</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">.</span><span class="n">days</span><span class="o">/</span><span class="n">days_in_year</span> <span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="n">cashflow_dates</span><span class="p">]</span>
<span class="lineno"> 7 </span> <span class="k">return</span> <span class="nb">sum</span><span class="p">(</span><span class="n">cashflow_amounts</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">/</span><span class="p">(</span><span class="mf">1.0</span> <span class="o">+</span> <span class="n">discount_rate</span><span class="p">)</span><span class="o">**</span><span class="n">e</span> <span class="k">for</span> <span class="n">i</span><span class="p">,</span><span class="n">e</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">times</span><span class="p">))</span>
<span class="lineno"> 8 </span><span class="k">def</span> <span class="nf">xirr</span><span class="p">(</span><span class="n">cashflow_dates</span><span class="p">,</span> <span class="n">cashflow_amounts</span><span class="p">):</span>
<span class="lineno"> 9 </span> <span class="c1"># we use the scipy.optimize.root function to find the root</span>
<span class="lineno">10 </span> <span class="k">return</span> <span class="n">root</span><span class="p">(</span><span class="n">fun</span><span class="o">=</span><span class="k">lambda</span> <span class="n">x</span><span class="p">:</span><span class="n">NPV</span><span class="p">(</span><span class="n">cashflow_dates</span><span class="p">,</span> <span class="n">cashflow_amounts</span><span class="p">,</span><span class="n">x</span><span class="p">),</span><span class="n">x0</span><span class="o">=</span><span class="p">[</span><span class="mf">0.0</span><span class="p">])</span><span class="o">.</span><span class="n">x</span></code></pre></figure></p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno">1 </span><span class="o">></span> <span class="kn">source</span><span class="p">(</span><span class="s">'xirr.R'</span><span class="p">)</span>
<span class="lineno">2 </span><span class="o">></span> cashflow_dates <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="kp">as.Date</span><span class="p">(</span><span class="s">'2010-01-01'</span><span class="p">),</span> <span class="kp">as.Date</span><span class="p">(</span><span class="s">'2010-06-01'</span><span class="p">),</span> <span class="kp">as.Date</span><span class="p">(</span><span class="s">'2012-01-01'</span><span class="p">))</span>
<span class="lineno">3 </span><span class="o">></span> cashflow_amounts <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">-1000</span><span class="p">,</span> <span class="m">200</span><span class="p">,</span> <span class="m">1000</span><span class="p">)</span>
<span class="lineno">4 </span><span class="o">></span> xirr<span class="p">(</span>cashflow_dates<span class="p">,</span> cashflow_amounts<span class="p">)</span>
<span class="lineno">5 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">0.1120706</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno">1 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">xirr</span> <span class="k">import</span> <span class="n">xirr</span>
<span class="lineno">2 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">datetime</span> <span class="k">import</span> <span class="n">datetime</span>
<span class="lineno">3 </span><span class="o">>>></span> <span class="n">cashflow_dates</span><span class="o">=</span><span class="p">[</span><span class="n">datetime</span><span class="p">(</span><span class="mi">2010</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">),</span> <span class="n">datetime</span><span class="p">(</span><span class="mi">2010</span><span class="p">,</span><span class="mi">6</span><span class="p">,</span><span class="mi">1</span><span class="p">),</span> <span class="n">datetime</span><span class="p">(</span><span class="mi">2012</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">)]</span>
<span class="lineno">4 </span><span class="o">>>></span> <span class="n">cashflow_amounts</span><span class="o">=</span><span class="p">[</span><span class="o">-</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">200</span><span class="p">,</span> <span class="mi">1000</span><span class="p">]</span>
<span class="lineno">5 </span><span class="o">>>></span> <span class="n">xirr</span><span class="p">(</span><span class="n">cashflow_dates</span><span class="p">,</span> <span class="n">cashflow_amounts</span><span class="p">)</span>
<span class="lineno">6 </span><span class="n">array</span><span class="p">([</span><span class="mf">0.1120734</span><span class="p">])</span></code></pre></figure></div>
</div>
<p>Please note that the R function <code>uniroot</code> works only on one-dimensional functions, while the <code>scipy.optimize.root</code> function also works on multi-dimensional functions. <code>scipy.optimize.root</code> requires an initial guess for the solution, but <code>uniroot</code> does not. To find roots of multi-dimensional functions in R, the <code>optim</code> function introduced in the next Section can also be used; the package <code>rootSolve</code> could be useful as well. Both functions offer other useful features, for example, control over the tolerance for termination.</p>
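<p>As a concrete sketch of the tolerance control mentioned above (this example is mine, not from the book): the closest Python analogue of R&#8217;s <code>uniroot</code> is <code>scipy.optimize.brentq</code>, which also brackets a one-dimensional root and exposes a termination tolerance:</p>

```python
from scipy.optimize import brentq

# solve f(x) = 0 on the bracketing interval [0, 2]; the root is sqrt(2)
f = lambda x: x**2 - 2.0
r = brentq(f, 0.0, 2.0, xtol=1e-12)  # xtol controls the termination tolerance
```

<p>Like <code>uniroot</code>, <code>brentq</code> needs a bracketing interval with a sign change rather than an initial guess.</p>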
<h2 id="tools">General purpose minimization tools in R/Python</h2>
<p>Implementing your own algorithms from scratch could be fun, but they may not always be efficient or stable; we do it in this book for pedagogical purposes. For general-purpose minimization problems, we can use the <code>optim</code> function in R and the <code>scipy.optimize.minimize</code> function in Python as a black box. Various optimization algorithms have been implemented as the workhorse of these two functions. Some of these optimization algorithms (e.g., quasi-Newton methods<sup class="footnote" id="fnr7"><a href="#fn7">7</a></sup>) require the gradient, and some are derivative-free (e.g., the Nelder&#8211;Mead method<sup class="footnote" id="fnr8"><a href="#fn8">8</a></sup>). First, let&#8217;s see a simple example.</p>
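<p>To make the black-box usage concrete (this toy quadratic objective is made up for illustration, not from the book), here is a minimal sketch of calling <code>scipy.optimize.minimize</code> with a derivative-free method and with a quasi-Newton method:</p>

```python
from scipy.optimize import minimize

# toy objective with its minimum at (1, -2); both methods should recover it
f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2

res_nm = minimize(f, x0=[0.0, 0.0], method='Nelder-Mead')  # derivative-free
res_qn = minimize(f, x0=[0.0, 0.0], method='BFGS')         # quasi-Newton; gradient estimated numerically
```

<p>Both calls return an object whose <code>x</code> attribute holds the minimizer found.</p>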
<appname>Application – Maximum likelihood estimation of normal distribution</appname>
<blockquote class="appquote">
<p>Maximum likelihood estimation (<span class="caps">MLE</span>)<sup class="footnote" id="fnr9"><a href="#fn9">9</a></sup> is a very important topic in Statistics. I will give a very brief introduction to what <span class="caps">MLE</span> is. To understand <span class="caps">MLE</span>, first we need to understand the likelihood function. Simply speaking, the likelihood function $L(\theta|{x})$ is a function of the unknown parameters $\theta$ that describes the probability (or, for continuous distributions, the density) of obtaining the observed data ${x}$ ($x$ is a realization of the random variable $X$). For example, when the random variable $X$ follows a continuous probability distribution with probability density function (pdf) $f_\theta$, $L(\theta|{x}) = f_{\theta}(x)$.</p>
<p>It&#8217;s common to observe a random sample consisting of multiple independent observations rather than a single observation. In that case, the likelihood function is</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
L(\theta|x)=\prod_{i=1} ^{n} {f_{\theta}(x_i)}.<br />
\end{split}<br />
\label{eq:5_0}<br />
\end{equation}<br />
$$</p>
<p>
<p>Given the observed data and a model based on which the data are generated, we may maximize the corresponding likelihood function to estimate the model parameters. <span class="caps">MLE</span> has some nice properties, such as consistency and efficiency. To better understand <span class="caps">MLE</span> and its properties, I recommend reading Statistical Inference<sup class="footnote" id="fnr10"><a href="#fn10">10</a></sup>.</p>
</p>
<p>
<p>In practice, it&#8217;s common to work with the logarithm of the likelihood function, i.e., the log-likelihood function; maximizing it is equivalent to minimizing the negative log-likelihood. When $X\sim\mathcal{N}(\mu,\sigma^2)$, the pdf of $X$ is given below</p>
</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
f(x|(\mu,\sigma^2))=\frac 1 {\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/{2\sigma^2}}.<br />
\end{split}<br />
\label{eq:5_2}<br />
\end{equation}<br />
$$</p>
<p>Taking the logarithm, the log-likelihood function is equal to <br />
$$<br />
\begin{equation}<br />
\begin{split}<br />
\mathcal{L}(\theta|x)={-\frac n 2 \log(2\pi)- n \log\sigma - \frac 1 {2\sigma^2} \sum_{i=1}^{n}(x_i-\mu)^2}.<br />
\end{split}<br />
\label{eq:5_3}<br />
\end{equation}<br />
$$</p>
<p>Since the first term in \eqref{eq:5_3} is a constant, we can simplify the log-likelihood function to</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\mathcal{L}(\theta|x)={- n \log\sigma- \frac 1 {2\sigma^2} \sum_{i=1}^{n}(x_i-\mu)^2}.<br />
\end{split}<br />
\label{eq:5_4}<br />
\end{equation}<br />
$$</p>
</blockquote>
<p>It&#8217;s worth noting that \eqref{eq:5_4} has a unique stationary point (assuming the observations are not all identical), which is the global maximum of the log-likelihood.</p>
<p>Let’s implement the <span class="caps">MLE</span> for normal distribution in R/Python.</p>
<language>R</language>
<p>chapter5/normal_mle.R<br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span>log_lik <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>theta<span class="p">,</span> x<span class="p">){</span>
<span class="lineno"> 2 </span> mu <span class="o">=</span> theta<span class="p">[</span><span class="m">1</span><span class="p">]</span>
<span class="lineno"> 3 </span> sigma <span class="o">=</span> theta<span class="p">[</span><span class="m">2</span><span class="p">]</span>
<span class="lineno"> 4 </span> <span class="c1"># we return the negative log-likelihood since optim is for minimization</span>
<span class="lineno"> 5 </span> <span class="o">-</span><span class="p">(</span><span class="o">-</span><span class="kp">length</span><span class="p">(</span>x<span class="p">)</span><span class="o">*</span><span class="kp">log</span><span class="p">(</span>sigma<span class="p">)</span> <span class="o">-</span> <span class="m">0.5</span><span class="o">/</span>sigma<span class="o">^</span><span class="m">2</span><span class="o">*</span><span class="kp">sum</span><span class="p">((</span>x<span class="o">-</span>mu<span class="p">)</span><span class="o">^</span><span class="m">2</span><span class="p">))</span>
<span class="lineno"> 6 </span><span class="p">}</span>
<span class="lineno"> 7 </span>
<span class="lineno"> 8 </span>normal_mle <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>x<span class="p">){</span>
<span class="lineno"> 9 </span> <span class="c1"># par=c(0,1) as the initial values</span>
<span class="lineno">10 </span> <span class="c1"># lower=c(-Inf, 1e-6) for the lower bounds</span>
<span class="lineno">11 </span> optim<span class="p">(</span>par<span class="o">=</span><span class="kt">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">1</span><span class="p">),</span> fn<span class="o">=</span>log_lik<span class="p">,</span> x<span class="o">=</span>x<span class="p">,</span> lower<span class="o">=</span><span class="kt">c</span><span class="p">(</span><span class="o">-</span><span class="kc">Inf</span><span class="p">,</span> <span class="m">1e-6</span><span class="p">),</span> method<span class="o">=</span><span class="s">"L-BFGS-B"</span><span class="p">)</span><span class="o">$</span>par
<span class="lineno">12 </span><span class="p">}</span></code></pre></figure></p>
<language>Python</language>
<p>chapter5/normal_mle.py<br />
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">from</span> <span class="nn">scipy.optimize</span> <span class="k">import</span> <span class="n">minimize</span>
<span class="lineno"> 2 </span><span class="kn">from</span> <span class="nn">numpy.random</span> <span class="k">import</span> <span class="n">normal</span>
<span class="lineno"> 3 </span><span class="kn">from</span> <span class="nn">math</span> <span class="k">import</span> <span class="n">log</span>
<span class="lineno"> 4 </span>
<span class="lineno"> 5 </span><span class="k">def</span> <span class="nf">log_lik</span><span class="p">(</span><span class="n">theta</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="lineno"> 6 </span> <span class="n">mu</span> <span class="o">=</span> <span class="n">theta</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="lineno"> 7 </span> <span class="n">sigma</span> <span class="o">=</span> <span class="n">theta</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="lineno"> 8 </span> <span class="c1"># return the negative log-likelihood for minimization </span>
<span class="lineno"> 9 </span> <span class="k">return</span> <span class="o">-</span><span class="p">(</span><span class="o">-</span><span class="nb">len</span><span class="p">(</span><span class="n">x</span><span class="p">)</span><span class="o">*</span><span class="n">log</span><span class="p">(</span><span class="n">sigma</span><span class="p">)</span> <span class="o">-</span> <span class="mf">0.5</span><span class="o">/</span><span class="n">sigma</span><span class="o">**</span><span class="mi">2</span><span class="o">*</span><span class="nb">sum</span><span class="p">((</span><span class="n">e</span><span class="o">-</span><span class="n">mu</span><span class="p">)</span><span class="o">**</span><span class="mi">2</span> <span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="n">x</span><span class="p">))</span>
<span class="lineno">10 </span>
<span class="lineno">11 </span><span class="k">def</span> <span class="nf">normal_mle</span><span class="p">(</span><span class="n">x</span><span class="p">):</span>
<span class="lineno">12 </span> <span class="k">return</span> <span class="n">minimize</span><span class="p">(</span><span class="n">fun</span><span class="o">=</span><span class="k">lambda</span> <span class="n">e</span><span class="p">:</span><span class="n">log_lik</span><span class="p">(</span><span class="n">e</span><span class="p">,</span><span class="n">x</span><span class="p">),</span><span class="n">x0</span><span class="o">=</span><span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">],</span><span class="n">method</span><span class="o">=</span><span class="s1">'L-BFGS-B'</span><span class="p">,</span><span class="n">bounds</span><span class="o">=</span><span class="p">((</span><span class="kc">None</span><span class="p">,</span><span class="kc">None</span><span class="p">),(</span><span class="mf">1e-6</span><span class="p">,</span><span class="kc">None</span><span class="p">)))</span><span class="o">.</span><span class="n">x</span></code></pre></figure></p>
<p>There is no bound on the mean parameter $\mu$, while we need to set a lower bound for the standard deviation $\sigma$. We chose &#8216;L-<span class="caps">BFGS</span>-B&#8217; as the optimization algorithm in this example, which requires the gradient of the objective function. When the gradient function is not provided, the gradient is estimated numerically. In general, providing the gradient function may speed up the optimization. &#8216;L-<span class="caps">BFGS</span>-B&#8217; is a quasi-Newton method. Newton&#8217;s method requires the Hessian matrix of the objective function, but calculating the Hessian may be expensive or even infeasible in some cases. Quasi-Newton methods approximate the Hessian instead of calculating it directly. Now let&#8217;s use these functions to estimate the parameters of the normal distribution.</p>
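<p>As an illustration of supplying the gradient (the function names below are mine, not from the book&#8217;s code), the gradient of the negative log-likelihood can be derived in closed form and passed via the <code>jac</code> argument of <code>scipy.optimize.minimize</code>:</p>

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x):
    # negative of the simplified log-likelihood in eq. (5_4)
    mu, sigma = theta
    return len(x)*np.log(sigma) + 0.5/sigma**2*np.sum((x - mu)**2)

def neg_log_lik_grad(theta, x):
    # closed-form gradient: d/dmu = -sum(x_i - mu)/sigma^2,
    #                       d/dsigma = n/sigma - sum((x_i - mu)^2)/sigma^3
    mu, sigma = theta
    r = x - mu
    return np.array([-np.sum(r)/sigma**2,
                     len(x)/sigma - np.sum(r**2)/sigma**3])

def normal_mle_with_grad(x):
    return minimize(fun=neg_log_lik, jac=neg_log_lik_grad, args=(x,),
                    x0=[0.0, 1.0], method='L-BFGS-B',
                    bounds=((None, None), (1e-6, None))).x
```

<p>With the analytic gradient, the optimizer avoids the extra objective evaluations needed for finite differences, and the estimates should match the finite-difference version up to numeric tolerance.</p>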
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno">1 </span><span class="o">></span> <span class="kn">source</span><span class="p">(</span><span class="s">'normal_mle.R'</span><span class="p">)</span>
<span class="lineno">2 </span><span class="o">></span> n<span class="o">=</span><span class="m">100</span> <span class="c1"># generate a random sample with size = 100</span>
<span class="lineno">3 </span><span class="o">></span> <span class="kp">set.seed</span><span class="p">(</span><span class="m">42</span><span class="p">)</span>
<span class="lineno">4 </span><span class="o">></span> x<span class="o">=</span>rnorm<span class="p">(</span>n<span class="p">,</span> mean<span class="o">=</span><span class="m">1.5</span><span class="p">,</span> sd<span class="o">=</span><span class="m">0.8</span><span class="p">)</span>
<span class="lineno">5 </span><span class="o">></span> normal_mle<span class="p">(</span>x<span class="p">)</span>
<span class="lineno">6 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">1.5260111</span> <span class="m">0.8289101</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno">1 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">numpy.random</span> <span class="k">import</span> <span class="n">seed</span><span class="p">,</span> <span class="n">normal</span>
<span class="lineno">2 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">normal_mle</span> <span class="k">import</span> <span class="n">normal_mle</span><span class="o">.</span><span class="n">R</span>
<span class="lineno">3 </span><span class="o">>>></span> <span class="n">seed</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
<span class="lineno">4 </span><span class="o">>>></span> <span class="n">n</span><span class="o">=</span><span class="mi">100</span>
<span class="lineno">5 </span><span class="o">>>></span> <span class="n">x</span><span class="o">=</span><span class="n">normal</span><span class="p">(</span><span class="mf">1.5</span><span class="p">,</span> <span class="mf">0.8</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="lineno">6 </span><span class="o">>>></span> <span class="n">normal_mle</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="lineno">7 </span><span class="n">array</span><span class="p">([</span><span class="mf">1.41692178</span><span class="p">,</span> <span class="mf">0.72289325</span><span class="p">])</span></code></pre></figure></div>
</div>
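<p>For the normal distribution the MLE actually has a closed form, which gives a handy sanity check on the numeric estimates above (a sketch; the function name is mine):</p>

```python
import numpy as np

def normal_mle_closed_form(x):
    # maximizing eq. (5_4) analytically: mu_hat is the sample mean and
    # sigma_hat^2 is the average squared deviation (n, not n-1, in the denominator)
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()
    sigma_hat = np.sqrt(np.mean((x - mu_hat)**2))
    return mu_hat, sigma_hat
```

<p>The optimizer output should agree with $\hat\mu=\bar{x}$ and $\hat\sigma=\sqrt{\frac 1 n \sum_{i=1}^{n}(x_i-\bar{x})^2}$ up to numeric tolerance.</p>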
<p>Let’s see another application of the general purpose minimization – logistic regression.</p>
<appname>Application – Logistic regression</appname>
<blockquote class="appquote">
<p>
<p>We have seen how linear regression works in Chapter 4. Logistic regression works similarly to linear regression. The major difference is that logistic regression is used for classification problems. Classification means identifying to which of a set of categories a new observation belongs. Among classification problems, if each observation belongs to one of two disjoint categories, it is a binary classification problem. Binary classification has lots of real-world applications, such as spam email filtering, loan default prediction, and sentiment analysis.</p>
</p>
<p>
<p>In logistic regression, the prediction output of an observation is the probability that the observation falls into a specific category. Let $\boldsymbol{X}$ represent an $n\times (p+1)$ matrix ($n>p$) of independent variables (predictors) with the constant vector $\boldsymbol{1}$ in the first column, and let $\boldsymbol{y}$ represent a column vector of the response variable (target). Please note $\boldsymbol{y}$ is a binary vector, i.e., $\boldsymbol{y}_i\in{\{0,1\}}$. Formally, a logistic regression is specified as</p>
</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
Pr(\boldsymbol{y}_i=1|\boldsymbol{x}_i,\boldsymbol{\beta})=\frac 1 {1+e^{-\boldsymbol{x}’_i\boldsymbol{\beta}}},<br />
\end{split}<br />
\label{eq:5_5}<br />
\end{equation}<br />
$$</p>
<p>
<p>where $\boldsymbol{x}_i=[1,\boldsymbol{X}_{1i},\boldsymbol{X}_{2i},…,\boldsymbol{X}_{pi}]’$, and $\boldsymbol{\beta}$ denotes the parameter of the logistic model.</p>
</p>
<p>
<p>What does \eqref{eq:5_5} mean? It implies the response variable $\boldsymbol{y}_i$ given $\boldsymbol{x}_i$ and $\boldsymbol{\beta}$ follows a Bernoulli distribution. More specifically, $\boldsymbol{y}_i\sim{Bern(1/( 1+\exp(-\boldsymbol{x}’_i\boldsymbol{\beta}) ))}$. Based on the assumption that the observations are independent, the log-likelihood function is</p>
</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
&\mathcal{L}(\beta|\boldsymbol{X},\boldsymbol{y}) \\<br />
&=\log{ \prod_{i=1}^{n} {\Big( {Pr(\boldsymbol{y}_i=1|\boldsymbol{x}_i,\boldsymbol{\beta})}^{\boldsymbol{y}_i } {Pr(\boldsymbol{y}_i=0|\boldsymbol{x}_i,\boldsymbol{\beta})}^{1-\boldsymbol{y}_i} \Big)}}\\<br />
&= \sum_{i=1}^{n} {\Big( \boldsymbol{y}_i \log{Pr(\boldsymbol{y}_i=1|\boldsymbol{x}_i,\boldsymbol{\beta})} + (1-\boldsymbol{y}_i) \log{Pr(\boldsymbol{y}_i=0|\boldsymbol{x}_i,\boldsymbol{\beta})} \Big)}\\<br />
&= \sum_{i=1}^{n} {\Big(\boldsymbol{y}_i\boldsymbol{x}’_i\boldsymbol{\beta} - \log(1+e^{\boldsymbol{x}’_i\boldsymbol{\beta}})\Big)}.<br />
\end{split}<br />
\label{eq:5_6}<br />
\end{equation}<br />
$$</p>
</blockquote>
<p>Given the log-likelihood function, we can get the maximum likelihood estimate of logistic regression by maximizing \eqref{eq:5_6}, i.e., by minimizing the negative of \eqref{eq:5_6}, which is convex. The minimization can be done similarly to linear regression via the iteratively reweighted least squares (<span class="caps">IRLS</span>) method<sup class="footnote" id="fnr5"><a href="#fn5">5</a></sup>. However, in this Section we will not use <span class="caps">IRLS</span>; instead, let&#8217;s use the <code>optim</code> function and the <code>scipy.optimize.minimize</code> function in R and Python, respectively.</p>
<language>R</language>
<p>chapter5/logistic_regression.R<br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="kn">library</span><span class="p">(</span>R6<span class="p">)</span>
<span class="lineno"> 2 </span>
<span class="lineno"> 3 </span>LogisticRegression <span class="o">=</span> R6Class<span class="p">(</span>
<span class="lineno"> 4 </span> <span class="s">"LogisticRegression"</span><span class="p">,</span>
<span class="lineno"> 5 </span> public <span class="o">=</span> <span class="kt">list</span><span class="p">(</span>
<span class="lineno"> 6 </span> coef <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno"> 7 </span> initialize <span class="o">=</span> <span class="kr">function</span><span class="p">()</span> <span class="p">{</span>
<span class="lineno"> 8 </span> <span class="p">},</span>
<span class="lineno"> 9 </span> sigmoid <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>x<span class="p">)</span> <span class="p">{</span>
<span class="lineno">10 </span> <span class="m">1.0</span><span class="o">/</span><span class="p">(</span><span class="m">1.0</span> <span class="o">+</span> <span class="kp">exp</span><span class="p">(</span><span class="o">-</span>x<span class="p">))</span>
<span class="lineno">11 </span> <span class="p">},</span>
<span class="lineno">12 </span> log_lik <span class="o">=</span> <span class="kr">function</span><span class="p">(</span><span class="kp">beta</span><span class="p">,</span> x<span class="p">,</span> y<span class="p">)</span> <span class="p">{</span>
<span class="lineno">13 </span> linear <span class="o">=</span> x <span class="o">%*%</span> <span class="kp">beta</span>
<span class="lineno">14 </span> ll <span class="o">=</span> <span class="kp">sum</span><span class="p">(</span>linear <span class="o">*</span> y<span class="p">)</span> <span class="o">-</span> <span class="kp">sum</span><span class="p">(</span><span class="kp">log</span><span class="p">(</span><span class="m">1</span> <span class="o">+</span> <span class="kp">exp</span><span class="p">(</span>linear<span class="p">)))</span>
<span class="lineno">15 </span> <span class="kr">return</span><span class="p">(</span><span class="o">-</span>ll<span class="p">)</span> <span class="c1"># return negative log-likelihood</span>
<span class="lineno">16 </span> <span class="p">},</span>
<span class="lineno">17 </span> fit <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>x<span class="p">,</span> y<span class="p">)</span> <span class="p">{</span>
<span class="lineno">18 </span> <span class="kr">if</span> <span class="p">(</span><span class="o">!</span><span class="kp">is.matrix</span><span class="p">(</span>x<span class="p">))</span> x <span class="o">=</span> <span class="kp">data.matrix</span><span class="p">(</span>x<span class="p">)</span>
<span class="lineno">19 </span> self<span class="o">$</span>coef <span class="o">=</span> optim<span class="p">(</span>par<span class="o">=</span><span class="kp">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">1</span><span class="o">+</span><span class="kp">ncol</span><span class="p">(</span>x<span class="p">)),</span> fn<span class="o">=</span>self<span class="o">$</span>log_lik<span class="p">,</span> x<span class="o">=</span><span class="kp">cbind</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> x<span class="p">),</span> y<span class="o">=</span>y<span class="p">,</span> method<span class="o">=</span><span class="s">"L-BFGS-B"</span><span class="p">)</span><span class="o">$</span>par
<span class="lineno">20 </span> <span class="p">},</span>
<span class="lineno">21 </span> predict <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>new_x<span class="p">)</span> <span class="p">{</span>
<span class="lineno">22 </span> <span class="kr">if</span> <span class="p">(</span><span class="o">!</span><span class="kp">is.matrix</span><span class="p">(</span>new_x<span class="p">))</span> new_x <span class="o">=</span> <span class="kp">data.matrix</span><span class="p">(</span>new_x<span class="p">)</span>
<span class="lineno">23 </span> linear <span class="o">=</span> <span class="kp">cbind</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> new_x<span class="p">)</span> <span class="o">%*%</span> self<span class="o">$</span>coef
<span class="lineno">24 </span> self<span class="o">$</span>sigmoid<span class="p">(</span>linear<span class="p">)</span>
<span class="lineno">25 </span> <span class="p">}</span>
<span class="lineno">26 </span> <span class="p">)</span>
<span class="lineno">27 </span><span class="p">)</span></code></pre></figure></p>
<language>Python</language>
<p>chapter5/logistic_regression.py<br />
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="kn">from</span> <span class="nn">scipy.optimize</span> <span class="k">import</span> <span class="n">minimize</span>
<span class="lineno"> 3 </span>
<span class="lineno"> 4 </span><span class="k">class</span> <span class="nc">LogisticRegression</span><span class="p">:</span>
<span class="lineno"> 5 </span> <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="lineno"> 6 </span> <span class="bp">self</span><span class="o">.</span><span class="n">coef</span> <span class="o">=</span> <span class="kc">None</span>
<span class="lineno"> 7 </span>
<span class="lineno"> 8 </span> <span class="c1"># we use '_' as the prefix of the method name to indicate the method is private</span>
<span class="lineno"> 9 </span> <span class="k">def</span> <span class="nf">_sigmoid</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="lineno">10 </span> <span class="k">return</span> <span class="mf">1.0</span><span class="o">/</span><span class="p">(</span><span class="mi">1</span> <span class="o">+</span> <span class="n">np</span><span class="o">.</span><span class="n">exp</span><span class="p">(</span><span class="o">-</span><span class="n">x</span><span class="p">))</span>
<span class="lineno">11 </span>
<span class="lineno">12 </span> <span class="k">def</span> <span class="nf">_log_lik</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">beta</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">):</span>
<span class="lineno">13 </span> <span class="n">linear</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">beta</span><span class="p">)</span>
<span class="lineno">14 </span> <span class="n">ll</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">linear</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">y</span><span class="p">))</span> <span class="o">-</span> <span class="n">np</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">log</span><span class="p">(</span><span class="mf">1.0</span><span class="o">+</span><span class="n">np</span><span class="o">.</span><span class="n">exp</span><span class="p">(</span><span class="n">linear</span><span class="p">)))</span>
<span class="lineno">15 </span> <span class="k">return</span> <span class="o">-</span><span class="n">ll</span>
<span class="lineno">16 </span>
<span class="lineno">17 </span> <span class="k">def</span> <span class="nf">fit</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">):</span>
<span class="lineno">18 </span> <span class="bp">self</span><span class="o">.</span><span class="n">coef</span> <span class="o">=</span> <span class="n">minimize</span><span class="p">(</span><span class="n">fun</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">_log_lik</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">insert</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">),</span><span class="n">y</span><span class="p">),</span> <span class="n">method</span><span class="o">=</span><span class="s1">'L-BFGS-B'</span><span class="p">,</span> <span class="n">x0</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="mi">1</span><span class="o">+</span><span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]))</span><span class="o">.</span><span class="n">x</span>
<span class="lineno">19 </span>
<span class="lineno">20 </span> <span class="k">def</span> <span class="nf">predict</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">new_x</span><span class="p">):</span>
<span class="lineno">21 </span> <span class="n">linear</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">insert</span><span class="p">(</span><span class="n">new_x</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="c1"># add 1 to new_x</span>
<span class="lineno">22 </span> <span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">_sigmoid</span><span class="p">(</span><span class="n">linear</span><span class="p">)</span></code></pre></figure></p>
<p>Now let&#8217;s see how to use our own logistic regression. We use the banknote dataset<sup class="footnote" id="fnr11"><a href="#fn11">11</a></sup> in this example. In the banknote dataset, there are four different predictors extracted from the image of a banknote, which are used to predict whether a banknote is genuine or forged.</p>
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="o">></span> <span class="kn">library</span><span class="p">(</span>Metrics<span class="p">)</span>
<span class="lineno"> 2 </span><span class="o">></span> banknote_data <span class="o">=</span> read.csv<span class="p">(</span><span class="s">'data_banknote_authentication.txt'</span><span class="p">,</span> header <span class="o">=</span> <span class="kc">FALSE</span><span class="p">)</span>
<span class="lineno"> 3 </span><span class="o">></span> <span class="kp">colnames</span><span class="p">(</span>banknote_data<span class="p">)</span> <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="s">'variance'</span><span class="p">,</span> <span class="s">'skewness'</span><span class="p">,</span> <span class="s">'curtosis'</span><span class="p">,</span> <span class="s">'entropy'</span><span class="p">,</span> <span class="s">'class'</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">></span>
<span class="lineno"> 5 </span><span class="o">></span> lr<span class="o">=</span>LogisticRegression<span class="o">$</span>new<span class="p">()</span>
<span class="lineno"> 6 </span><span class="o">></span> lr<span class="o">$</span>fit<span class="p">(</span>banknote_data<span class="p">[,</span><span class="m">1</span><span class="o">:</span><span class="m">4</span><span class="p">],</span> banknote_data<span class="o">$</span><span class="kp">class</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>lr<span class="o">$</span>coef<span class="p">)</span>
<span class="lineno"> 8 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">7.3216635</span> <span class="m">-7.8590497</span> <span class="m">-4.1907866</span> <span class="m">-5.2872185</span> <span class="m">-0.6052346</span>
<span class="lineno"> 9 </span><span class="o">></span> <span class="kp">print</span><span class="p">(</span>lr<span class="o">$</span>predict<span class="p">(</span>banknote_data<span class="p">[,</span><span class="m">1</span><span class="o">:</span><span class="m">4</span><span class="p">])[</span><span class="m">1</span><span class="o">:</span><span class="m">5</span><span class="p">])</span> <span class="c1"># see the first 5 predictions</span>
<span class="lineno">10 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">4.071505e-19</span> <span class="m">6.741755e-22</span> <span class="m">2.187781e-10</span> <span class="m">1.604733e-16</span>
<span class="lineno">11 </span><span class="p">[</span><span class="m">5</span><span class="p">]</span> <span class="m">4.579230e-01</span></code></pre></figure><language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">sklearn.metrics</span> <span class="k">import</span> <span class="n">log_loss</span>
<span class="lineno"> 2 </span><span class="o">>>></span> <span class="kn">import</span> <span class="nn">pandas</span> <span class="k">as</span> <span class="nn">pd</span>
<span class="lineno"> 3 </span><span class="o">>>></span> <span class="n">banknote_data</span><span class="o">=</span><span class="n">pd</span><span class="o">.</span><span class="n">read_csv</span><span class="p">(</span><span class="s1">'data_banknote_authentication.txt'</span><span class="p">,</span> <span class="n">header</span><span class="o">=</span><span class="kc">None</span><span class="p">)</span>
<span class="lineno"> 4 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">logistic_regression</span> <span class="k">import</span> <span class="n">LogisticRegression</span>
<span class="lineno"> 5 </span><span class="o">>>></span> <span class="n">lr</span><span class="o">=</span><span class="n">LogisticRegression</span><span class="p">()</span>
<span class="lineno"> 6 </span><span class="o">>>></span> <span class="n">lr</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">banknote_data</span><span class="o">.</span><span class="n">iloc</span><span class="p">[:,</span><span class="mi">0</span><span class="p">:</span><span class="mi">4</span><span class="p">]</span><span class="o">.</span><span class="n">values</span><span class="p">,</span> <span class="n">banknote_data</span><span class="o">.</span><span class="n">iloc</span><span class="p">[:,</span><span class="mi">4</span><span class="p">])</span>
<span class="lineno"> 7 </span><span class="o">>>></span> <span class="nb">print</span><span class="p">(</span><span class="n">lr</span><span class="o">.</span><span class="n">coef</span><span class="p">)</span>
<span class="lineno"> 8 </span><span class="p">[</span> <span class="mf">7.32157876</span> <span class="o">-</span><span class="mf">7.8591774</span> <span class="o">-</span><span class="mf">4.19090781</span> <span class="o">-</span><span class="mf">5.28734934</span> <span class="o">-</span><span class="mf">0.60536929</span><span class="p">]</span>
<span class="lineno"> 9 </span><span class="o">>>></span> <span class="n">lr</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">banknote_data</span><span class="o">.</span><span class="n">iloc</span><span class="p">[:,</span><span class="mi">0</span><span class="p">:</span><span class="mi">4</span><span class="p">]</span><span class="o">.</span><span class="n">values</span><span class="p">)[:</span><span class="mi">5</span><span class="p">]</span> <span class="c1"># predict the probabilities of y=1</span>
<span class="lineno">10 </span><span class="n">array</span><span class="p">([</span><span class="mf">4.06674238e-19</span><span class="p">,</span> <span class="mf">6.73409431e-22</span><span class="p">,</span> <span class="mf">2.18663270e-10</span><span class="p">,</span> <span class="mf">1.60365564e-16</span><span class="p">,</span>
<span class="lineno">11 </span> <span class="mf">4.57910123e-01</span><span class="p">])</span></code></pre></figure><p>The above implementation of logistic regression works well. But it is only a pedagogical toy to illustrate what logistic regression is and how it can be solved with a general-purpose optimization tool. In practice, there are fast and stable off-the-shelf software tools to choose from. It’s also worth noting that most iterative optimization algorithms allow you to specify the maximum number of iterations as well as the tolerance. Sometimes it’s helpful to tune these parameters.</p>
<p>In practice, the penalized ($L^1,L^2$) versions of logistic regression are more commonly used than the vanilla logistic regression we introduced above.</p>
<h2 id="lp">Linear programming</h2>
<p>In linear programming (LP)<sup class="footnote" id="fnr12"><a href="#fn12">12</a></sup> both the objective function and the constraints are linear. LP can be expressed as follows,</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
&\max_{\boldsymbol{x}} {\boldsymbol{c}’\boldsymbol{x}}\\<br />
&subject\quad to\quad \boldsymbol{Ax}\leq{\boldsymbol{b}},<br />
\end{split}<br />
\label{eq:5_1}<br />
\end{equation}<br />
$$</p>
<p>where the vector $\boldsymbol{x}$ represents the decision variable; $\boldsymbol{A}$ is a matrix, $\boldsymbol{b}$ and $\boldsymbol{c}$ are vectors. We can do minimization instead of maximization in practice. Also, it’s completely valid to have equality constraints for LP problems. All LP problems can be converted into the form of \eqref{eq:5_1}. For example, the equality constraint $\boldsymbol{Cx}=\boldsymbol{d}$ can be transformed to inequality constraints $\boldsymbol{Cx}\leq\boldsymbol{d}$ and $\boldsymbol{-Cx}\leq-\boldsymbol{d}$.</p>
<p>Every LP problem falls into one of three categories:</p>
<ul>
<li>infeasible – no solution that satisfies all constraints exists;</li>
<li>unbounded – for every feasible solution, there exists another feasible solution that improves the objective function;</li>
<li>having an optimal solution – a unique maximum (or minimum) value of the objective function exists.</li>
</ul>
<p>When an LP problem has an optimal solution, there might be different sets of decision variables that lead to the same optimal value of the objective function. LP problems are also convex problems<sup class="footnote" id="fnr2"><a href="#fn2">2</a></sup>. The algorithms to solve LP problems have been well studied. The basic and popular algorithm is Dantzig’s simplex method, which was invented in 1947<sup class="footnote" id="fnr13"><a href="#fn13">13</a></sup>. More recent interior-point methods have better worst-case complexity than the simplex method. In R/Python, we can find packages that implement both algorithms.</p>
<p>LP has been extensively used in business, economics, and some engineering problems. First, let’s see a simple application of LP in portfolio construction.</p>
<appname>Application – Portfolio construction</appname>
<blockquote class="appquote">
<p>
<p>Let’s assume there are 4 different investment opportunities, A, B, C, and D, listed in the table below. There is no intermediate payment between the vintage year and the maturity year. Payments received on maturity can be reinvested. We also assume the risk-free rate is 0. The goal is to construct a portfolio using these investment opportunities with $10,000 at year 1 so that the net asset value at year 5 is maximized.</p>
</p>
<figcaption class="centerfigcaption">Available investment opportunities</figcaption>
<table>
<tr>
<td> <strong>investment</strong> </td>
<td> <strong>vintage</strong> </td>
<td> <strong>maturity</strong> </td>
<td> <strong>rate of return (annualized)</strong> </td>
</tr>
<tr>
<td> <strong>A</strong> </td>
<td> year 1 </td>
<td> year 2 </td>
<td> 0.08 </td>
</tr>
<tr>
<td> <strong>B</strong> </td>
<td> year 2 </td>
<td> year 5 </td>
<td> 0.12 </td>
</tr>
<tr>
<td> <strong>C</strong> </td>
<td> year 3 </td>
<td> year 5 </td>
<td> 0.16 </td>
</tr>
<tr>
<td> <strong>D</strong> </td>
<td> year 1 </td>
<td> year 4 </td>
<td> 0.14 </td>
</tr>
</table>
<p>
<p>Let $x_1$, $x_2$, $x_3$ and $x_4$ denote the amount to invest in each investment opportunity; and let $x_0$ denote the cash not invested in year 1. The problem can be formulated as</p>
</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\max_{\boldsymbol{x}} \quad & \boldsymbol{x}\sb{0}+\boldsymbol{x}\sb{1}{(1+0.08)} - \boldsymbol{x}\sb{2} - \boldsymbol{x}\sb{3}\\<br />
& + \boldsymbol{x}\sb{2}{(1+0.12)^3} + \boldsymbol{x}\sb{3}{(1+0.16)^2} + \boldsymbol{x}\sb{4}{(1+0.14)^3} \\<br />
subject\quad to \quad\quad & \boldsymbol{x}\sb{0} + \boldsymbol{x}\sb{1} + \boldsymbol{x}\sb{4} = 10000 ;\\<br />
& \boldsymbol{x}\sb{0} + \boldsymbol{x}\sb{1}(1+0.08)\geq\boldsymbol{x}\sb{2} + \boldsymbol{x}\sb{3} ; \\<br />
& \boldsymbol{x}\sb{i} \geq0;i=0,…,4.<br />
\end{split}<br />
\label{eq:linear_investment}<br />
\end{equation}<br />
$$</p>
</blockquote>
<p>In Python, there are quite a few tools that can be used to solve LP, such as <code>ortools</code><sup class="footnote" id="fnr14"><a href="#fn14">14</a></sup>, <code>scipy.optimize.linprog</code><sup class="footnote" id="fnr15"><a href="#fn15">15</a></sup>, and <code>CVXPY</code><sup class="footnote" id="fnr16"><a href="#fn16">16</a></sup>. We will show the solution using <code>ortools</code> in Python and <code>lpSolve</code><sup class="footnote" id="fnr17"><a href="#fn17">17</a></sup> in R.</p>
<language>R</language>
<p>chapter5/portfolio_construction.R<br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="kn">library</span><span class="p">(</span>lpSolve<span class="p">)</span>
<span class="lineno"> 2 </span>
<span class="lineno"> 3 </span>total <span class="o">=</span> <span class="m">10000.0</span>
<span class="lineno"> 4 </span>rate_of_return <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">0.08</span><span class="p">,</span> <span class="m">0.12</span><span class="p">,</span> <span class="m">0.16</span><span class="p">,</span> <span class="m">0.14</span><span class="p">)</span>
<span class="lineno"> 5 </span><span class="c1"># define the objective function</span>
<span class="lineno"> 6 </span>f.obj <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> <span class="m">1</span><span class="o">+</span>rate_of_return<span class="p">[</span><span class="m">2</span><span class="p">],</span> <span class="m">-1</span><span class="o">+</span><span class="p">(</span><span class="m">1</span><span class="o">+</span>rate_of_return<span class="p">[</span><span class="m">3</span><span class="p">])</span><span class="o">^</span><span class="m">3</span><span class="p">,</span> <span class="m">-1</span><span class="o">+</span><span class="p">(</span><span class="m">1</span><span class="o">+</span>rate_of_return<span class="p">[</span><span class="m">4</span><span class="p">])</span><span class="o">^</span><span class="m">2</span><span class="p">,</span> <span class="p">(</span><span class="m">1</span><span class="o">+</span>rate_of_return<span class="p">[</span><span class="m">5</span><span class="p">])</span><span class="o">^</span><span class="m">3</span><span class="p">)</span>
<span class="lineno"> 7 </span><span class="c1"># define the constraints </span>
<span class="lineno"> 8 </span>f.con <span class="o">=</span> <span class="kp">t</span><span class="p">(</span><span class="kt">array</span><span class="p">(</span><span class="kt">c</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> <span class="m">1</span><span class="p">,</span> <span class="m">0</span><span class="p">,</span> <span class="m">0</span><span class="p">,</span> <span class="m">1</span><span class="p">,</span> <span class="m">1</span><span class="p">,</span> <span class="m">1</span><span class="o">+</span>rate_of_return<span class="p">[</span><span class="m">2</span><span class="p">],</span> <span class="m">-1</span><span class="p">,</span> <span class="m">-1</span><span class="p">,</span> <span class="m">0</span><span class="p">),</span> <span class="kt">c</span><span class="p">(</span><span class="m">5</span><span class="p">,</span> <span class="m">2</span><span class="p">)))</span>
<span class="lineno"> 9 </span><span class="c1"># define the direction in the constraints</span>
<span class="lineno">10 </span>f.dir <span class="o">=</span> <span class="kt">c</span><span class="p">(</span><span class="s">"="</span><span class="p">,</span> <span class="s">">="</span><span class="p">)</span>
<span class="lineno">11 </span><span class="c1"># define the right hand side of the constraints</span>
<span class="lineno">12 </span>f.rhs <span class="o">=</span> <span class="kt">c</span><span class="p">(</span>total<span class="p">,</span> <span class="m">0</span><span class="p">)</span>
<span class="lineno">13 </span>
<span class="lineno">14 </span>solution <span class="o">=</span> lp<span class="p">(</span><span class="s">"max"</span><span class="p">,</span> f.obj<span class="p">,</span> f.con<span class="p">,</span> f.dir<span class="p">,</span> f.rhs<span class="p">)</span>
<span class="lineno">15 </span><span class="kp">cat</span><span class="p">(</span><span class="s">'x:'</span><span class="p">,</span> solution<span class="o">$</span>solution<span class="p">,</span> <span class="s">'\n'</span><span class="p">)</span>
<span class="lineno">16 </span><span class="kp">cat</span><span class="p">(</span><span class="s">'obj:'</span><span class="p">,</span> solution<span class="o">$</span>objval<span class="p">,</span> <span class="s">'\n'</span><span class="p">)</span></code></pre></figure></p>
<language>Python</language>
<p>chapter5/portfolio_construction.py<br />
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">from</span> <span class="nn">ortools.linear_solver</span> <span class="k">import</span> <span class="n">pywraplp</span>
<span class="lineno"> 2 </span>
<span class="lineno"> 3 </span><span class="n">total</span> <span class="o">=</span> <span class="mf">10000.0</span>
<span class="lineno"> 4 </span><span class="n">rate_of_return</span> <span class="o">=</span> <span class="p">[</span><span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.08</span><span class="p">,</span> <span class="mf">0.12</span><span class="p">,</span> <span class="mf">0.16</span><span class="p">,</span> <span class="mf">0.14</span><span class="p">]</span>
<span class="lineno"> 5 </span><span class="n">solver</span><span class="o">=</span><span class="n">pywraplp</span><span class="o">.</span><span class="n">Solver</span><span class="p">(</span><span class="s1">'PortfolioConstruction'</span><span class="p">,</span> <span class="n">pywraplp</span><span class="o">.</span><span class="n">Solver</span><span class="o">.</span><span class="n">GLOP_LINEAR_PROGRAMMING</span><span class="p">)</span>
<span class="lineno"> 6 </span>
<span class="lineno"> 7 </span><span class="c1"># create the variables</span>
<span class="lineno"> 8 </span><span class="n">x</span><span class="o">=</span><span class="p">[</span><span class="kc">None</span><span class="p">]</span><span class="o">*</span><span class="mi">5</span>
<span class="lineno"> 9 </span><span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">5</span><span class="p">):</span>
<span class="lineno">10 </span> <span class="n">x</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">=</span><span class="n">solver</span><span class="o">.</span><span class="n">NumVar</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">solver</span><span class="o">.</span><span class="n">infinity</span><span class="p">(),</span> <span class="s1">'x'</span><span class="o">+</span><span class="nb">str</span><span class="p">(</span><span class="n">i</span><span class="p">))</span>
<span class="lineno">11 </span>
<span class="lineno">12 </span><span class="c1"># create the constraints</span>
<span class="lineno">13 </span><span class="n">constraints</span> <span class="o">=</span> <span class="p">[</span><span class="kc">None</span><span class="p">]</span><span class="o">*</span><span class="mi">2</span>
<span class="lineno">14 </span><span class="c1"># set the equality constraint</span>
<span class="lineno">15 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">=</span> <span class="n">solver</span><span class="o">.</span><span class="n">Constraint</span><span class="p">(</span><span class="n">total</span><span class="p">,</span> <span class="n">total</span><span class="p">)</span>
<span class="lineno">16 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">17 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">18 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">4</span><span class="p">],</span> <span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">19 </span><span class="c1"># set the inequality constraint</span>
<span class="lineno">20 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="n">solver</span><span class="o">.</span><span class="n">Constraint</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">solver</span><span class="o">.</span><span class="n">infinity</span><span class="p">())</span>
<span class="lineno">21 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">22 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="mf">1.0</span><span class="o">+</span><span class="n">rate_of_return</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="lineno">23 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">2</span><span class="p">],</span> <span class="o">-</span><span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">24 </span><span class="n">constraints</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="o">-</span><span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">25 </span>
<span class="lineno">26 </span><span class="c1"># set objective function</span>
<span class="lineno">27 </span><span class="n">objective</span> <span class="o">=</span> <span class="n">solver</span><span class="o">.</span><span class="n">Objective</span><span class="p">()</span>
<span class="lineno">28 </span><span class="n">objective</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="mf">1.0</span><span class="p">)</span>
<span class="lineno">29 </span><span class="n">objective</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="mf">1.0</span><span class="o">+</span><span class="n">rate_of_return</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="lineno">30 </span><span class="n">objective</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">2</span><span class="p">],</span> <span class="o">-</span><span class="mf">1.0</span><span class="o">+</span><span class="p">(</span><span class="mi">1</span><span class="o">+</span><span class="n">rate_of_return</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span><span class="o">**</span><span class="mi">3</span><span class="p">)</span>
<span class="lineno">31 </span><span class="n">objective</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="o">-</span><span class="mf">1.0</span><span class="o">+</span><span class="p">(</span><span class="mi">1</span><span class="o">+</span><span class="n">rate_of_return</span><span class="p">[</span><span class="mi">3</span><span class="p">])</span><span class="o">**</span><span class="mi">2</span><span class="p">)</span>
<span class="lineno">32 </span><span class="n">objective</span><span class="o">.</span><span class="n">SetCoefficient</span><span class="p">(</span><span class="n">x</span><span class="p">[</span><span class="mi">4</span><span class="p">],</span> <span class="p">(</span><span class="mi">1</span><span class="o">+</span><span class="n">rate_of_return</span><span class="p">[</span><span class="mi">4</span><span class="p">])</span><span class="o">**</span><span class="mi">3</span><span class="p">)</span>
<span class="lineno">33 </span><span class="c1"># we want to maximize the objective function</span>
<span class="lineno">34 </span><span class="n">objective</span><span class="o">.</span><span class="n">SetMaximization</span><span class="p">()</span>
<span class="lineno">35 </span><span class="n">status</span> <span class="o">=</span> <span class="n">solver</span><span class="o">.</span><span class="n">Solve</span><span class="p">()</span>
<span class="lineno">36 </span><span class="k">if</span> <span class="n">status</span> <span class="o">==</span> <span class="n">solver</span><span class="o">.</span><span class="n">OPTIMAL</span><span class="p">:</span>
<span class="lineno">37 </span> <span class="n">sol</span> <span class="o">=</span> <span class="p">[</span><span class="n">e</span><span class="o">.</span><span class="n">solution_value</span><span class="p">()</span> <span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="n">x</span><span class="p">]</span>
<span class="lineno">38 </span> <span class="nb">print</span><span class="p">(</span><span class="s2">"x: </span><span class="si">{0}</span><span class="s2">"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">sol</span><span class="p">))</span>
<span class="lineno">39 </span> <span class="nb">print</span><span class="p">(</span><span class="s2">"objective: </span><span class="si">{0}</span><span class="s2">"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">objective</span><span class="o">.</span><span class="n">Value</span><span class="p">()))</span></code></pre></figure></p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno">1 </span><span class="o">></span> <span class="kn">source</span><span class="p">(</span> <span class="s">'portfolio_construction.R'</span><span class="p">)</span>
<span class="lineno">2 </span>x<span class="o">:</span> <span class="m">0</span> <span class="m">10000</span> <span class="m">10800</span> <span class="m">0</span> <span class="m">0</span>
<span class="lineno">3 </span>obj<span class="o">:</span> <span class="m">15173.22</span> </code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno">1 </span><span class="n">chapter5</span> <span class="err">$</span><span class="n">python3</span><span class="o">.</span><span class="mi">7</span> <span class="n">portfolio_construction</span><span class="o">.</span><span class="n">py</span>
<span class="lineno">2 </span><span class="n">x</span><span class="p">:</span> <span class="p">[</span><span class="mf">0.0</span><span class="p">,</span> <span class="mf">10000.0</span><span class="p">,</span> <span class="mf">10800.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">]</span>
<span class="lineno">3 </span><span class="n">objective</span><span class="p">:</span> <span class="mf">15173.222400000004</span></code></pre></figure></div>
</div>
<p>We see that the optimal portfolio allocation results in a net asset value of 15173.22 in year 5. You may notice that the Python solution is lengthy compared to the R solution. That is because the <code>ortools</code> interface in Python is in an <span class="caps">OOP</span> fashion and we add the constraints one by one. But for <code>lpSolve</code>, we utilize the compact matrix form \eqref{eq:5_1} to specify an LP problem. In <code>ortools</code>, we specify the bounds of a decision variable during its declaration. In <code>lpSolve</code>, all decision variables are non-negative by default.</p>
<p>LP also has applications in machine learning, for example, Least absolute deviations (<span class="caps">LAD</span>) regression<sup class="footnote" id="fnr18"><a href="#fn18">18</a></sup> can be solved by LP. <span class="caps">LAD</span> regression is just another type of linear regression. The major difference between <span class="caps">LAD</span> regression and the linear regression we introduced in Chapter 4 is that the loss function to minimize in <span class="caps">LAD</span> is the sum of the absolute values of the residuals rather than the <span class="caps">SSR</span> loss.</p>
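<p>To make the connection concrete, here is a sketch (not the book’s code, and assuming <code>scipy</code> is available) that fits <span class="caps">LAD</span> regression as an LP: we introduce one auxiliary variable $u_i\geq|y_i-\boldsymbol{x}_i'\boldsymbol{b}|$ per observation and minimize $\sum_i u_i$ with <code>scipy.optimize.linprog</code>.</p>

```python
# Sketch (not the book's code) of LAD regression as an LP, assuming scipy is
# available: minimize sum_i u_i subject to u_i >= |y_i - x_i'b|.
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    n, p = X.shape
    X1 = np.hstack([np.ones((n, 1)), X])  # prepend the intercept column
    p1 = p + 1
    # decision variables: [b (p1 entries, free), u (n entries, >= 0)]
    c = np.concatenate([np.zeros(p1), np.ones(n)])
    # u_i >= +(y_i - x_i'b)  <=>  -x_i'b - u_i <= -y_i
    # u_i >= -(y_i - x_i'b)  <=>   x_i'b - u_i <=  y_i
    A_ub = np.block([[-X1, -np.eye(n)], [X1, -np.eye(n)]])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p1 + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p1]  # fitted coefficients, intercept first
```

<p>At the optimum each $u_i$ equals the absolute residual $|y_i-\boldsymbol{x}_i'\boldsymbol{b}|$, so the objective is exactly the sum of absolute residuals.</p>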
<h2 id="miscellaneous">Miscellaneous</h2>
<h3>Stochasticity</h3>
<p>The update schedule of gradient descent is straightforward. But what can we do if the dataset is too big to fit into memory? That is a valid question, especially in the era of big data. There is a naive and brilliant solution – we can take a random sample from all the observations and evaluate the loss function only on the random sample in each iteration. This variant of the gradient descent algorithm is called stochastic gradient descent.</p>
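<p>A minimal sketch of this idea (not the book’s code) for the <span class="caps">SSR</span> loss of linear regression might look as follows; the batch size, learning rate, and number of epochs are illustrative choices.</p>

```python
# Sketch of minibatch stochastic gradient descent for linear regression
# (SSR loss); batch_size, lr and epochs are illustrative choices.
import numpy as np

def sgd_linear(X, y, lr=0.01, batch_size=32, epochs=200, seed=42):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(epochs):
        idx = rng.permutation(n)  # reshuffle the observations each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            # gradient of the mean squared residuals on the minibatch only
            grad = 2.0 * Xb.T @ (Xb @ b - yb) / len(batch)
            b -= lr * grad
    return b
```

<p>Each update touches only <code>batch_size</code> rows, so the full dataset never needs to be evaluated (or even held in memory) at once.</p>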
<h3>Coordinate descent</h3>
<p>You may wonder: if we can randomly take a subset of all the observations, can we also take just a subset of the variables $\boldsymbol{x}$? Yes, and sometimes it is quite useful. For example, in coordinate descent, in each iteration we can update a single coordinate $\boldsymbol{x}^j$ rather than the entire $\boldsymbol{x}$. Coordinate descent has been widely applied to the Lasso solution of the generalized linear models<sup class="footnote" id="fnr19"><a href="#fn19">19</a></sup>.</p>
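<p>Here is a sketch (not the book’s code) of cyclic coordinate descent for the <span class="caps">SSR</span> loss: each one-dimensional subproblem has a closed-form solution, so every step minimizes the loss exactly along one coordinate.</p>

```python
# Sketch of cyclic coordinate descent for min ||y - X b||^2: each step solves
# the one-dimensional problem in coordinate j in closed form.
import numpy as np

def coordinate_descent_ls(X, y, iterations=100):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # precompute ||X_j||^2
    r = y - X @ b                  # residual, kept up to date
    for _ in range(iterations):
        for j in range(p):
            r += X[:, j] * b[j]               # partial residual without coordinate j
            b[j] = X[:, j] @ r / col_sq[j]    # closed-form 1-D minimizer
            r -= X[:, j] * b[j]               # restore the full residual
    return b
```

<p>For Lasso, the same loop applies with the one-dimensional update replaced by its soft-thresholded version.</p>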
<h3>Newton method</h3>
<p>The Newton method looks similar to gradient descent, but it requires the Hessian matrix of the objective function (see \eqref{eq:newton_method}). In theory, the convergence of the Newton method is faster than that of the gradient descent algorithm. But not all differentiable objective functions have second-order derivatives; and the evaluation of the Hessian matrix may also be computationally expensive.</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\boldsymbol{x}^{(k+1)}=\boldsymbol{x}^{(k)}-\gamma {H(\boldsymbol{x}^{(k)})}^{-1} \nabla f(\boldsymbol{x}^{(k)})<br />
\end{split}<br />
\label{eq:newton_method}<br />
\end{equation}<br />
$$</p>
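<p>A sketch of the update in \eqref{eq:newton_method} (not the book’s code) is shown below on a toy smooth convex function; the test function, step size $\gamma=1$, and tolerance are illustrative choices.</p>

```python
# Sketch of the Newton update x <- x - gamma * H(x)^{-1} grad f(x); the toy
# objective, gamma=1 and the stopping tolerance are illustrative choices.
import numpy as np

def newton_minimize(grad, hess, x0, gamma=1.0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # solve H(x) d = g rather than forming the inverse explicitly
        x = x - gamma * np.linalg.solve(hess(x), g)
    return x

# toy smooth convex objective: f(x) = exp(x0) + x0^2 + 2*x1^2
grad = lambda x: np.array([np.exp(x[0]) + 2.0 * x[0], 4.0 * x[1]])
hess = lambda x: np.array([[np.exp(x[0]) + 2.0, 0.0], [0.0, 4.0]])
x_star = newton_minimize(grad, hess, [1.0, 1.0])
```

<p>Solving the linear system instead of inverting the Hessian is the standard way to implement the update in practice.</p>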
<p>We have seen the proximal gradient descent method for Lasso. Actually, the second order derivatives can also be incorporated into the proximal gradient descent method, which leads to the proximal Newton method.</p>
<h3>Constrained optimization</h3>
<p>In general, constrained optimization has the following form.</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
&\max_{\boldsymbol{x}} {f(\boldsymbol{x})}\\<br />
subject\quad to\quad & g_i(\boldsymbol{x})\leq{\boldsymbol{b}_i};i=1,…,m,\\<br />
& h_j(\boldsymbol{x})={\boldsymbol{c}_j};j=1,…,k,<br />
\end{split}<br />
\label{eq:constrainedoptimization}<br />
\end{equation}<br />
$$<br />
where the $g_{i}$ are the inequality constraints and the $h_{j}$ are the equality constraints, both of which could be linear or nonlinear. A constrained optimization problem may or may not be convex. Although some tools exist in R/Python for constrained optimization, they may fail if you just throw the problem at them. Thus, it is better to have a tailored treatment for the problem. In general, that requires understanding the basic theories for constrained optimization problems, such as the Lagrangian and the Karush-Kuhn-Tucker (<span class="caps">KKT</span>) conditions<sup class="footnote" id="fnr2"><a href="#fn2">2</a></sup>.</p>
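<p>For illustration, here is a sketch (not the book’s code, and assuming <code>scipy</code> is available) of handing a small constrained problem to a general-purpose tool, <code>scipy.optimize.minimize</code> with the SLSQP method; the toy problem and starting point are made up for this example.</p>

```python
# Sketch of solving a small constrained problem with a general-purpose tool
# (scipy's SLSQP); the toy problem and starting point are illustrative:
#   maximize x0*x1  subject to  x0 + x1 <= 4, x0 >= 0, x1 >= 0.
import numpy as np
from scipy.optimize import minimize

res = minimize(
    lambda x: -(x[0] * x[1]),            # minimize the negative to maximize
    x0=[1.0, 1.0],
    method="SLSQP",
    bounds=[(0, None), (0, None)],
    constraints=[{"type": "ineq", "fun": lambda x: 4.0 - x[0] - x[1]}],
)
# by symmetry the optimum is x0 = x1 = 2 with objective value 4
```

<p>For well-behaved problems like this one a general solver works out of the box; for harder nonconvex problems, a tailored formulation often makes the difference between success and failure.</p>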
<h3>Quadratic programming</h3>
<p>Quadratic programming (QP)<sup class="footnote" id="fnr20"><a href="#fn20">20</a></sup> is also commonly seen in machine learning. For example, the optimization problems in vanilla linear regression, ridge linear regression, and the Lasso solution of linear regression all fall into the QP category. To see why the Lasso solution of linear regression is also QP, let’s convert the optimization problem from the unconstrained form \eqref{eq:5_lasso} to the constrained form \eqref{eq:5_lasso11}, similar to ridge regression.</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\min_{\boldsymbol{\hat\beta}}& \quad \boldsymbol{e}’\boldsymbol{e}\\<br />
subject\quad to\quad & \lambda \sum_{i=1}^p {|\boldsymbol{\beta}_i|}\leq t.<br />
\end{split}<br />
\label{eq:5_lasso11}<br />
\end{equation}<br />
$$</p>
<p>The absolute value operator in the constraint in \eqref{eq:5_lasso11} can be eliminated by expanding the constraint into $2^p$ linear constraints.</p>
<p>$$<br />
\begin{equation}<br />
\begin{split}<br />
\min_{\boldsymbol{\hat\beta}} \quad \boldsymbol{e}’\boldsymbol{e}& \\<br />
subject \quad to & \\<br />
\lambda\boldsymbol{\beta}\sb{1}+\lambda\boldsymbol{\beta}\sb{2}+…+ & \lambda\boldsymbol{\beta}\sb{p} \leq t \\<br />
- \lambda\boldsymbol{\beta}\sb{1}+\lambda\boldsymbol{\beta}\sb{2}+…+ & \lambda\boldsymbol{\beta}\sb{p} \leq t \\<br />
& … \\<br />
- \lambda\boldsymbol{\beta}\sb{1}-\lambda\boldsymbol{\beta}\sb{2}-…- & \lambda\boldsymbol{\beta}\sb{p} \leq t.<br />
\end{split}<br />
\label{eq:5_lasso_2}<br />
\end{equation}<br />
$$</p>
<p>It is clear that the problem specified in \eqref{eq:5_lasso_2} is a QP problem. The optimization problem of the support vector machine (<span class="caps">SVM</span>) can also be formulated as a QP problem.</p>
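The $2^p$ sign-pattern constraints can be generated mechanically; a small sketch follows, where the helper name `lasso_sign_patterns` is hypothetical, for illustration only.

```python
import itertools
import numpy as np

def lasso_sign_patterns(p):
    # each row s in {+1, -1}^p encodes one linear constraint:
    # lambda * (s_1*beta_1 + ... + s_p*beta_p) <= t
    return np.array(list(itertools.product([1, -1], repeat=p)))

A = lasso_sign_patterns(3)
# 2^3 = 8 rows; jointly they are equivalent to lambda * sum_i |beta_i| <= t,
# because the maximum of s'beta over all sign patterns equals sum_i |beta_i|
```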
<p>In R and Python, there are solvers<sup class="footnote" id="fnr21"><a href="#fn21">21</a></sup> designed specifically for QP problems, which can be used as black boxes.</p>
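Even without a dedicated solver, the QP structure is easy to exhibit. Ridge regression written as $\min_{\boldsymbol{\beta}}\ \frac{1}{2}\boldsymbol{\beta}'P\boldsymbol{\beta}+q'\boldsymbol{\beta}$ with $P=X'X+\lambda I$ and $q=-X'y$ has no constraints, so any QP routine reduces to solving the linear system $P\boldsymbol{\beta}=-q$. A numpy-only sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
lam = 0.5

# ridge as an unconstrained QP: minimize 0.5 * beta' P beta + q' beta
P = X.T @ X + lam * np.eye(3)
q = -X.T @ y
beta_qp = np.linalg.solve(P, -q)

# agrees with the textbook ridge closed form (X'X + lam*I)^{-1} X'y
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
```

With inequality constraints added (as in the Lasso reformulation above), the linear solve no longer suffices and a proper QP solver is needed.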
<h3>Metaheuristic optimization</h3>
<p>Some optimization algorithms are problem-specific; for example, the simplex method is mainly used for linear programming problems. In contrast, metaheuristic optimization algorithms are not problem-specific. Rather, metaheuristic algorithms are strategies to guide the search process for solutions. There is a wide variety of metaheuristic algorithms in the literature, and one of my favorites is simulated annealing (SA)<sup class="footnote" id="fnr22"><a href="#fn22">22</a></sup>. The SA algorithm can be used for both continuous and discrete optimization problems (or hybrids, such as mixed integer programming<sup class="footnote" id="fnr23"><a href="#fn23">23</a></sup>). The <code>optim</code> function in R has the SA algorithm implemented (set <code>method = 'SANN'</code>) for general-purpose minimization with continuous variables; discrete optimization problems usually require a customized implementation. The pseudocode of SA is given as follows.</p>
<ul>
<li>define the cooling schedule $T_k$, an energy function $E$, and initial solution $s_0$</li>
<li>for $k=1,…,m$<br />
- pick a random neighbor $s_k$ based on $s_{k-1}$<br />
- calculate the energy change $\delta=E(s_k)-E(s_{k-1})$<br />
- calculate the acceptance probability $P=e^{-\delta/T_k}$<br />
- if $P<random(0, 1)$, reject the move and set $s_k=s_{k-1}$</li>
<li>return $s_m$.</li>
</ul>
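The pseudocode maps directly onto a short continuous-minimization routine. In the sketch below, the Gaussian neighbor proposal, the geometric cooling schedule $T_k=T_0\alpha^k$, and all parameter values are illustrative choices, not part of the algorithm itself.

```python
import math
import random

def simulated_annealing(energy, s0, T0=10.0, alpha=0.99, max_iter=2000, seed=42):
    rng = random.Random(seed)
    s, best = s0, s0
    for k in range(1, max_iter + 1):
        T_k = T0 * alpha**k              # geometric cooling schedule
        s_new = s + rng.gauss(0.0, 0.5)  # pick a random neighbor
        delta = energy(s_new) - energy(s)
        # accept downhill moves always; uphill moves with probability e^{-delta/T_k}
        if delta <= 0 or math.exp(-delta / T_k) >= rng.random():
            s = s_new
            if energy(s) < energy(best):
                best = s                 # keep the best solution seen so far
    return best

# toy run: minimize (v - 3)^2, whose minimum is at v = 3
x = simulated_annealing(lambda v: (v - 3.0)**2, s0=-5.0)
```

The `delta <= 0` shortcut is equivalent to the pseudocode (the acceptance probability exceeds 1 for downhill moves) while avoiding overflow in the exponential.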
<p>Let’s see how we can apply SA for the famous traveling salesman problem.</p>
<appname>Application – Traveling salesman problem</appname>
<blockquote class="appquote">
<p>
<p>Given a list of cities, what is the shortest route that visits each city once and returns to the original city? This combinatorial optimization problem is the traveling salesman problem (<span class="caps">TSP</span>). When there are $K$ cities, the number of feasible solutions is $(K-1)!/2$. It is impractical to enumerate and evaluate all solutions for a large number of cities, and the SA algorithm fits this problem quite well.</p>
</p>
<p>
<p>Let’s consider a toy problem with only 10 cities. To apply the SA algorithm, we need to determine a cooling schedule; for simplicity, we choose $T_k=T_0\alpha^k$. As for the energy function, the length of the route is a natural choice. For many problems, the challenge in applying SA is how to generate a new solution from the incumbent. Most of the time there are various ways to do that, some of which may not be efficient. In this <span class="caps">TSP</span> problem, we adopt a simple strategy: randomly select two different cities from the current route and switch them. If the switch results in a shorter route, we accept it; if the new route is longer, we could still accept the switch with the acceptance probability.</p>
</p>
</blockquote>
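The $(K-1)!/2$ count grows factorially; even a 10-city problem already has 181,440 distinct routes, as a quick computation shows.

```python
import math

def num_tsp_routes(K):
    # distinct tours: fix the starting city, which leaves (K-1)! orderings,
    # then halve because each cycle is counted in both directions
    return math.factorial(K - 1) // 2

print(num_tsp_routes(10))  # 181440
```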
<figure class="text-center">
<img src="figures/cities.png" alt="TSP with 10 cities represented by blue dots." style="display: block;margin: auto;" width="50%">
<figcaption class="centerfigcaption"><span class="caps">TSP</span> with 10 cities represented by blue dots.</figcaption>
</figure>
<p>SA with this simple strategy is implemented in both R and Python as follows.</p>
<language>R</language>
<p>chapter5/<span class="caps">TSP</span>.R<br />
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno"> 1 </span><span class="kn">library</span><span class="p">(</span>R6<span class="p">)</span>
<span class="lineno"> 2 </span>
<span class="lineno"> 3 </span>TSP <span class="o">=</span> R6Class<span class="p">(</span>
<span class="lineno"> 4 </span> <span class="s">"TSP"</span><span class="p">,</span>
<span class="lineno"> 5 </span> public <span class="o">=</span> <span class="kt">list</span><span class="p">(</span>
<span class="lineno"> 6 </span> cities <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno"> 7 </span> distance <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="lineno"> 8 </span> initialize <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>cities<span class="p">,</span> seed <span class="o">=</span> <span class="m">42</span><span class="p">)</span> <span class="p">{</span>
<span class="lineno"> 9 </span> <span class="kp">set.seed</span><span class="p">(</span>seed<span class="p">)</span>
<span class="lineno">10 </span> self<span class="o">$</span>cities <span class="o">=</span> cities
<span class="lineno">11 </span> self<span class="o">$</span>distance <span class="o">=</span> <span class="kp">as.matrix</span><span class="p">(</span>dist<span class="p">(</span>cities<span class="p">))</span>
<span class="lineno">12 </span> <span class="p">},</span>
<span class="lineno">13 </span> calculate_length <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>path<span class="p">)</span> <span class="p">{</span>
<span class="lineno">14 </span> l <span class="o">=</span> <span class="m">0.0</span>
<span class="lineno">15 </span> <span class="kr">for</span> <span class="p">(</span>i <span class="kr">in</span> <span class="m">2</span><span class="o">:</span><span class="kp">length</span><span class="p">(</span>path<span class="p">))</span> l <span class="o">=</span> l<span class="o">+</span>self<span class="o">$</span>distance<span class="p">[</span>path<span class="p">[</span>i<span class="m">-1</span><span class="p">],</span> path<span class="p">[</span>i<span class="p">]]</span>
<span class="lineno">16 </span> l <span class="o">+</span> self<span class="o">$</span>distance<span class="p">[</span>path<span class="p">[</span><span class="m">1</span><span class="p">],</span>path<span class="p">[</span><span class="kp">length</span><span class="p">(</span>path<span class="p">)]]</span>
<span class="lineno">17 </span> <span class="p">},</span>
<span class="lineno">18 </span> accept <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>T_k<span class="p">,</span> energy_old<span class="p">,</span> energy_new<span class="p">)</span> <span class="p">{</span>
<span class="lineno">19 </span> delta <span class="o">=</span> energy_new <span class="o">-</span> energy_old
<span class="lineno">20 </span> p <span class="o">=</span> <span class="kp">exp</span><span class="p">(</span><span class="o">-</span>delta<span class="o">/</span>T_k<span class="p">)</span>
<span class="lineno">21 </span> p<span class="o">>=</span>runif<span class="p">(</span><span class="m">1</span><span class="p">)</span>
<span class="lineno">22 </span> <span class="p">},</span>
<span class="lineno">23 </span> solve <span class="o">=</span> <span class="kr">function</span><span class="p">(</span>T_0<span class="p">,</span> alpha<span class="p">,</span> max_iter<span class="p">)</span> <span class="p">{</span>
<span class="lineno">24 </span> T_k <span class="o">=</span> T_0
<span class="lineno">25 </span> <span class="c1"># create the initial solution s0</span>
<span class="lineno">26 </span> s <span class="o">=</span> <span class="kp">sample</span><span class="p">(</span><span class="kp">nrow</span><span class="p">(</span>self<span class="o">$</span>distance<span class="p">),</span> <span class="kp">nrow</span><span class="p">(</span>self<span class="o">$</span>distance<span class="p">),</span> replace<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span>
<span class="lineno">27 </span> length_old <span class="o">=</span> self<span class="o">$</span>calculate_length<span class="p">(</span>s<span class="p">)</span>
<span class="lineno">28 </span> <span class="kr">for</span> <span class="p">(</span>i <span class="kr">in</span> <span class="m">1</span><span class="o">:</span>max_iter<span class="p">){</span>
<span class="lineno">29 </span> T_k <span class="o">=</span> T_k<span class="o">*</span>alpha
<span class="lineno">30 </span> <span class="c1"># we randomly exchange 2 cities in the route</span>
<span class="lineno">31 </span> candidates <span class="o">=</span> <span class="kp">sample</span><span class="p">(</span><span class="kp">nrow</span><span class="p">(</span>self<span class="o">$</span>distance<span class="p">),</span> <span class="m">2</span><span class="p">,</span> replace<span class="o">=</span><span class="kc">FALSE</span><span class="p">)</span>
<span class="lineno">32 </span> temp <span class="o">=</span> s<span class="p">[</span>candidates<span class="p">[</span><span class="m">1</span><span class="p">]]</span>
<span class="lineno">33 </span> s<span class="p">[</span>candidates<span class="p">[</span><span class="m">1</span><span class="p">]]</span> <span class="o">=</span> s<span class="p">[</span>candidates<span class="p">[</span><span class="m">2</span><span class="p">]]</span>
<span class="lineno">34 </span> s<span class="p">[</span>candidates<span class="p">[</span><span class="m">2</span><span class="p">]]</span> <span class="o">=</span> temp
<span class="lineno">35 </span> length_new <span class="o">=</span> self<span class="o">$</span>calculate_length<span class="p">(</span>s<span class="p">)</span>
<span class="lineno">36 </span> <span class="c1"># check if we accept or reject the exchange</span>
<span class="lineno">37 </span> <span class="kr">if</span> <span class="p">(</span>self<span class="o">$</span>accept<span class="p">(</span>T_k<span class="p">,</span> length_old<span class="p">,</span> length_new<span class="p">)){</span>
<span class="lineno">38 </span> <span class="c1"># accept</span>
<span class="lineno">39 </span> length_old <span class="o">=</span> length_new
<span class="lineno">40 </span> <span class="p">}</span><span class="kp">else</span><span class="p">{</span>
<span class="lineno">41 </span> <span class="c1"># reject</span>
<span class="lineno">42 </span> s<span class="p">[</span>candidates<span class="p">[</span><span class="m">2</span><span class="p">]]</span> <span class="o">=</span> s<span class="p">[</span>candidates<span class="p">[</span><span class="m">1</span><span class="p">]]</span>
<span class="lineno">43 </span> s<span class="p">[</span>candidates<span class="p">[</span><span class="m">1</span><span class="p">]]</span> <span class="o">=</span> temp
<span class="lineno">44 </span> <span class="p">}</span>
<span class="lineno">45 </span> <span class="p">}</span>
<span class="lineno">46 </span> <span class="kr">if</span> <span class="p">(</span>s<span class="p">[</span><span class="m">1</span><span class="p">]</span><span class="o">==</span><span class="m">1</span><span class="p">)</span> <span class="kr">return</span><span class="p">(</span><span class="kt">list</span><span class="p">(</span><span class="s">'s'</span><span class="o">=</span>s<span class="p">,</span><span class="s">'length'</span><span class="o">=</span>length_old<span class="p">))</span>
<span class="lineno">47 </span> start <span class="o">=</span> <span class="kp">which</span><span class="p">(</span>s<span class="o">==</span><span class="m">1</span><span class="p">)</span>
<span class="lineno">48 </span> s_reordered <span class="o">=</span> <span class="kt">c</span><span class="p">(</span>s<span class="p">[</span>start<span class="o">:</span><span class="kp">length</span><span class="p">(</span>s<span class="p">)],</span> s<span class="p">[</span><span class="m">1</span><span class="o">:</span>start<span class="m">-1</span><span class="p">])</span>
<span class="lineno">49 </span> <span class="kt">list</span><span class="p">(</span><span class="s">'s'</span><span class="o">=</span>s_reordered<span class="p">,</span><span class="s">'length'</span><span class="o">=</span>length_old<span class="p">)</span>
<span class="lineno">50 </span> <span class="p">}</span>
<span class="lineno">51 </span> <span class="p">)</span>
<span class="lineno">52 </span><span class="p">)</span></code></pre></figure></p>
<language>Python</language>
<p>chapter5/<span class="caps">TSP</span>.py<br />
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno"> 1 </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="lineno"> 2 </span><span class="kn">from</span> <span class="nn">scipy.spatial.distance</span> <span class="k">import</span> <span class="n">cdist</span>
<span class="lineno"> 3 </span>
<span class="lineno"> 4 </span><span class="k">class</span> <span class="nc">TSP</span><span class="p">:</span>
<span class="lineno"> 5 </span> <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">cities</span><span class="p">,</span> <span class="n">seed</span><span class="o">=</span><span class="mi">42</span><span class="p">):</span>
<span class="lineno"> 6 </span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="n">seed</span><span class="p">)</span>
<span class="lineno"> 7 </span> <span class="bp">self</span><span class="o">.</span><span class="n">cities</span> <span class="o">=</span> <span class="n">cities</span>
<span class="lineno"> 8 </span> <span class="c1"># let's calculate the pairwise distance</span>
<span class="lineno"> 9 </span> <span class="bp">self</span><span class="o">.</span><span class="n">distance</span> <span class="o">=</span> <span class="n">cdist</span><span class="p">(</span><span class="n">cities</span><span class="p">,</span> <span class="n">cities</span><span class="p">)</span>
<span class="lineno">10 </span>
<span class="lineno">11 </span> <span class="k">def</span> <span class="nf">calculate_length</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">path</span><span class="p">):</span>
<span class="lineno">12 </span> <span class="n">l</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="lineno">13 </span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="nb">len</span><span class="p">(</span><span class="n">path</span><span class="p">)):</span>
<span class="lineno">14 </span> <span class="n">l</span> <span class="o">+=</span> <span class="bp">self</span><span class="o">.</span><span class="n">distance</span><span class="p">[</span><span class="n">path</span><span class="p">[</span><span class="n">i</span><span class="o">-</span><span class="mi">1</span><span class="p">],</span> <span class="n">path</span><span class="p">[</span><span class="n">i</span><span class="p">]]</span>
<span class="lineno">15 </span> <span class="k">return</span> <span class="n">l</span> <span class="o">+</span> <span class="bp">self</span><span class="o">.</span><span class="n">distance</span><span class="p">[</span><span class="n">path</span><span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">],</span> <span class="n">path</span><span class="p">[</span><span class="mi">0</span><span class="p">]]</span>
<span class="lineno">16 </span>
<span class="lineno">17 </span> <span class="k">def</span> <span class="nf">accept</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">T_k</span><span class="p">,</span> <span class="n">energy_old</span><span class="p">,</span> <span class="n">energy_new</span><span class="p">):</span>
<span class="lineno">18 </span> <span class="n">delta</span> <span class="o">=</span> <span class="n">energy_new</span> <span class="o">-</span> <span class="n">energy_old</span>
<span class="lineno">19 </span> <span class="n">p</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">exp</span><span class="p">(</span><span class="o">-</span><span class="n">delta</span><span class="o">/</span><span class="n">T_k</span><span class="p">)</span>
<span class="lineno">20 </span> <span class="k">return</span> <span class="n">p</span> <span class="o">>=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">low</span><span class="o">=</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">high</span><span class="o">=</span><span class="mf">1.0</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="mi">1</span><span class="p">)[</span><span class="mi">0</span><span class="p">]</span>
<span class="lineno">21 </span>
<span class="lineno">22 </span> <span class="k">def</span> <span class="nf">solve</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">T_0</span><span class="p">,</span> <span class="n">alpha</span><span class="p">,</span> <span class="n">max_iter</span><span class="p">):</span>
<span class="lineno">23 </span> <span class="n">T_k</span> <span class="o">=</span> <span class="n">T_0</span>
<span class="lineno">24 </span> <span class="c1"># let's create an initial solution s0</span>
<span class="lineno">25 </span> <span class="n">s</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">permutation</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">cities</span><span class="p">))</span>
<span class="lineno">26 </span> <span class="n">length_old</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">calculate_length</span><span class="p">(</span><span class="n">s</span><span class="p">)</span>
<span class="lineno">27 </span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">max_iter</span><span class="p">):</span>
<span class="lineno">28 </span> <span class="n">T_k</span> <span class="o">*=</span> <span class="n">alpha</span>
<span class="lineno">29 </span> <span class="c1"># we randomly choose 2 cities and exchange their positions in the route</span>
<span class="lineno">30 </span> <span class="n">pos_1</span><span class="p">,</span> <span class="n">pos_2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">s</span><span class="p">),</span> <span class="mi">2</span><span class="p">,</span> <span class="n">replace</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
<span class="lineno">31 </span> <span class="n">s</span><span class="p">[</span><span class="n">pos_1</span><span class="p">],</span> <span class="n">s</span><span class="p">[</span><span class="n">pos_2</span><span class="p">]</span> <span class="o">=</span> <span class="n">s</span><span class="p">[</span><span class="n">pos_2</span><span class="p">],</span> <span class="n">s</span><span class="p">[</span><span class="n">pos_1</span><span class="p">]</span>
<span class="lineno">32 </span> <span class="n">length_new</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">calculate_length</span><span class="p">(</span><span class="n">s</span><span class="p">)</span>
<span class="lineno">33 </span> <span class="c1"># check if we want to accept the new solution or not</span>
<span class="lineno">34 </span> <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">accept</span><span class="p">(</span><span class="n">T_k</span><span class="p">,</span> <span class="n">length_old</span><span class="p">,</span> <span class="n">length_new</span><span class="p">):</span>
<span class="lineno">35 </span> <span class="c1"># we accept the solution and update the old energy</span>
<span class="lineno">36 </span> <span class="n">length_old</span> <span class="o">=</span> <span class="n">length_new</span>
<span class="lineno">37 </span> <span class="k">else</span><span class="p">:</span>
<span class="lineno">38 </span> <span class="c1"># we reject the solution and reverse the switch</span>
<span class="lineno">39 </span> <span class="n">s</span><span class="p">[</span><span class="n">pos_1</span><span class="p">],</span> <span class="n">s</span><span class="p">[</span><span class="n">pos_2</span><span class="p">]</span> <span class="o">=</span> <span class="n">s</span><span class="p">[</span><span class="n">pos_2</span><span class="p">],</span> <span class="n">s</span><span class="p">[</span><span class="n">pos_1</span><span class="p">]</span>
<span class="lineno">40 </span> <span class="c1"># let's reorder the result</span>
<span class="lineno">41 </span> <span class="k">if</span> <span class="n">s</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
<span class="lineno">42 </span> <span class="k">return</span> <span class="n">s</span><span class="p">,</span> <span class="n">length_old</span>
<span class="lineno">43 </span> <span class="n">start</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">argwhere</span><span class="p">(</span><span class="n">s</span> <span class="o">==</span> <span class="mi">0</span><span class="p">)[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">]</span>
<span class="lineno">44 </span> <span class="k">return</span> <span class="n">np</span><span class="o">.</span><span class="n">hstack</span><span class="p">((</span><span class="n">s</span><span class="p">[</span><span class="n">start</span><span class="p">:],</span> <span class="n">s</span><span class="p">[:</span><span class="n">start</span><span class="p">])),</span> <span class="n">length_old</span></code></pre></figure></p>
<p>Let’s run our <span class="caps">TSP</span> solver to see the results.</p>
<div class="codewrapper">
<div class="codeleft">
<language>R</language>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span></span><span class="lineno">1 </span><span class="o">></span> cities <span class="o">=</span> read.csv<span class="p">(</span> <span class="s">'chapter5/cities.csv'</span><span class="p">)</span>
<span class="lineno">2 </span><span class="o">></span> cities <span class="o">=</span> cities<span class="p">[,</span><span class="kt">c</span><span class="p">(</span><span class="s">'x'</span><span class="p">,</span><span class="s">'y'</span><span class="p">)]</span>
<span class="lineno">3 </span><span class="o">></span> tsp <span class="o">=</span> TSP<span class="o">$</span>new<span class="p">(</span>cities<span class="p">)</span>
<span class="lineno">4 </span><span class="o">></span> tsp<span class="o">$</span><span class="kp">solve</span><span class="p">(</span><span class="m">2000</span><span class="p">,</span><span class="m">0.99</span><span class="p">,</span><span class="m">2000</span><span class="p">)</span>
<span class="lineno">5 </span><span class="o">$</span>s
<span class="lineno">6 </span> <span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">1</span> <span class="m">3</span> <span class="m">2</span> <span class="m">10</span> <span class="m">5</span> <span class="m">6</span> <span class="m">9</span> <span class="m">4</span> <span class="m">7</span> <span class="m">8</span>
<span class="lineno">7 </span>
<span class="lineno">8 </span><span class="o">$</span><span class="kp">length</span>
<span class="lineno">9 </span><span class="p">[</span><span class="m">1</span><span class="p">]</span> <span class="m">47.37255</span></code></pre></figure></div>
<div class="coderight">
<language>Python</language>
<figure class="highlight"><pre><code class="language-python3" data-lang="python3"><span></span><span class="lineno">1 </span><span class="o">>>></span> <span class="kn">from</span> <span class="nn">TSP</span> <span class="k">import</span> <span class="n">TSP</span>
<span class="lineno">2 </span><span class="o">>>></span> <span class="n">cities</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">read_csv</span><span class="p">(</span> <span class="s1">'chapter5/cities.csv'</span><span class="p">)</span>
<span class="lineno">3 </span><span class="o">>>></span> <span class="n">cities</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">cities</span><span class="o">.</span><span class="n">x</span><span class="p">,</span> <span class="n">cities</span><span class="o">.</span><span class="n">y</span><span class="p">))</span>
<span class="lineno">4 </span><span class="o">>>></span> <span class="n">tsp</span> <span class="o">=</span> <span class="n">TSP</span><span class="p">(</span><span class="n">cities</span><span class="p">)</span>
<span class="lineno">5 </span><span class="o">>>></span> <span class="n">tsp</span><span class="o">.</span><span class="n">solve</span><span class="p">(</span><span class="mi">2000</span><span class="p">,</span> <span class="mf">0.99</span><span class="p">,</span> <span class="mi">2000</span><span class="p">)</span>
<span class="lineno">6 </span><span class="p">(</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">7</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">9</span><span class="p">]),</span> <span class="mf">47.372551972154646</span><span class="p">)</span></code></pre></figure></div>
</div>
<p>In the implementation above, we set the initial temperature to $2000$ and the cooling rate $\alpha$ to $0.99$. After 2000 iterations, we obtain the same route (up to starting city and direction) from both implementations. In practice, it is common for metaheuristic algorithms to get trapped in local optima.</p>
<figure class="text-center">
<img src="figures/sa.png" alt="A Route found by our TSP solver." style="display: block;margin: auto;" width="40%">
<figcaption class="centerfigcaption">A Route found by our <span class="caps">TSP</span> solver.</figcaption>
</figure>
<hr />
<p style="vertical-align:middle;" class="footnote" id="fn1"><a href="#fnr1"><sup>1</sup></a> https://en.wikipedia.org/wiki/Operations_research</p>
<p class="footnote" id="fn2"><a href="#fnr2"><sup>2</sup></a> Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.</p>
<p class="footnote" id="fn3"><a href="#fnr3"><sup>3</sup></a> https://en.wikipedia.org/wiki/Norm_(mathematics)</p>
<p class="footnote" id="fn4"><a href="#fnr4"><sup>4</sup></a> https://en.wikipedia.org/wiki/Efficient_estimator</p>
<p class="footnote" id="fn5"><a href="#fnr5"><sup>5</sup></a> Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning. Springer series in statistics New York, NY, <span class="caps">USA</span>:, second edition, 2009.</p>