https://en.wikipedia.org/wiki/David_B._Fogel
This wonderful interview with David Fogel provides a rich and unique history and understanding of artificial intelligence. David has a deep background in evolutionary computing and neural networks. He is the President of Natural Selection, Inc. (NSI), which applies nature-inspired, computational intelligence algorithms for predictive analytics and forecasting. NSI has done work in medicine, industry, and defense.
David is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers). He is the author of six books and over 200 publications in evolutionary computing and neural networks. David also developed Blondie24 in 1999, a machine that evolved itself into an expert checkers player. David tells us that story.
Other things we talk about:
-How did David’s dad encourage him to be creative and curious?
-What type of computer did David use to create an intelligent checkers machine?
-How did David use machine learning to help his own tinnitus?
-How does David think about a new machine learning project from beginning to end?
Transcript
David Kruse: Hey everyone. Welcome to another episode of Flyover Labs. Today we are lucky enough to have David Fogel with us. David is the President of Natural Selection, and what they do is predictive analytics and forecasting in a range of fields, from medicine to industry to defense. David is a Fellow of the IEEE and the author of six books and over 200 publications around evolutionary computing and neural networks. David also developed Blondie24 back in 1999, which is a machine that evolved itself into an expert checkers player, so we’ll have to hear a little more about that, of course. I invited David on the show because he has really deep experience with creating algorithms to help solve problems, and I was curious to hear what David is doing now and to hear more about his past. So David, thanks for joining us today.
David Fogel: It’s my pleasure. I am looking forward to it.
David Kruse: Great, great. And so first off, I mean you have quite a deep background, but do you mind kind of just giving us an overview on your background and we can dive into a few details on it as well.
David Fogel: Yeah, sure. Well, any time I tell my story I have to start with my father. My dad was very vocal, and in 1960 he was on loan from a company here in San Diego to the National Science Foundation in Washington, D.C., where he was tasked with making a report for the U.S. Congress on what they should invest in and what they should research. One of the things he looked at, among many, was artificial intelligence research. Now, at that time AI research was focused mainly on heuristic systems, which later became expert systems, or on trying to build brain models. Of course, in 1960 we didn’t know very much about how the brain worked, and much of it is still a mystery, but back then it was really a mystery, so that wasn’t a very promising or fruitful way to go. His insight was a little different. Instead of looking at people and either asking them for rules about what they do or trying to emulate them by modeling their brains, why not view people as just one product of an evolutionary process that has been going on for 3.5 billion years or so, and model that process on a computer? It comes up with ingenious solutions to problems all the time, life and death problems, and so his idea was to model that. That was the evolutionary programming idea. He won a contract from the Navy to research it, and the people he was working for said to him, ‘Look, that’s very nice, but that’s not what we do here. So if you want to do that you can take this contract and form your own company, or you can get back to work.’ So he said, ‘Well, I’m forming my own company; I’ll see you later.’ Two guys left with him, and he ran his own company called Decision Science Incorporated from about 1965 through 1981 or 1982, when it was bought up by a defense contractor, and he advanced all of this evolutionary programming research and application through industry as opposed to through academia. He had finished his PhD in 1964 at UCLA, so he had the ability to go the academic route, but that’s not what he wanted to do. He was definitely more of a practicing engineer than an academic engineer. So I came along and started working with him as I was finishing my bachelor’s degree in statistics at UC Santa Barbara. I would work with him one day a week at the defense contractor, and after I graduated it became full time, and I have been working full time in evolutionary algorithms and neural nets and everything since then. My dad passed away in ’07, but when I finished my PhD in 1992 from UCSD in engineering, we decided to start our own company called Natural Selection Incorporated, and it’s still around today. As you mentioned, I’m the President. My brother Gary is the CEO. His PhD is in biology, so we like to joke that he has learned a lot of engineering and I have learned a lot of biology over the years. We have had the opportunity to apply these sorts of computational intelligence methods to lots of different real-world problems, and also to some things we might just call fun and games, over the 20-plus years of doing this with the company and now 32 years of my doing it in general.
David Kruse: Wow! Do you think, growing up, because of your father’s background and his experience, did he expose you to different ideas or things that maybe other kids were not exposed to?
David Fogel: Yes, of course. He was a very imaginative person, as you might imagine, given that he could come up with simulating evolution on computers in 1960. But he did other things too. In 1958, I believe, he had the first patent on active noise-canceling headsets, which came to nothing for him financially because it was way too early; the patent expired in the 1970s and then Bose made a lot of money on the idea. Anyway, back to your question. Yes, he would do things like this: we would be driving in the car, I would ask about something I didn’t know, and he would tell me about it, whatever it was. He would ask me to imagine engineering things that were pretty much impossible for a teenager. Let’s make a submarine that can fly. Well, why would you want to do that? I don’t know, but let’s think about it. And so we would think about it for half an hour over dinner or over a game board. So yes, he definitely imparted a spirit of investigation and curiosity, of trying things that may not always work but that you keep trying anyway, and I was very fortunate to have him as a dad.
David Kruse: That sounds wonderful. And before we get too far, can you just describe, I know it’s a broad field, but evolutionary computing, just so that everyone can get a feel for it?
David Fogel: Sure, yeah. The idea of evolutionary algorithms or evolutionary computing is this. In typical optimization, as practiced up until maybe 1990 or 2000, and I guess to some degree even today, you have a solution and you are trying to make it better. So you have one solution to a problem, maybe something about how to drive around town efficiently like GPS or FedEx might have to do, or how to fly airplanes efficiently, whatever the problem is, and you try to improve that solution. Typically there would be different ways of making changes to that solution. One might be: let’s make a random change. Another might be: let’s make a change based on some mathematical formula. But still there is just one solution. The idea of evolution is that nature doesn’t work on one solution; it works on a population of solutions, whatever the size of the population. It might be a herd of elephants, it might be a big beehive, or it might be something with even more members, like bacteria in a petri dish. Whatever it is, there is a population of things, and they are all looking for a solution that might be the right answer to the problem. Typically the problem is: how do I stay alive, how do I reproduce, how do I eat somebody else, or some combination of those things. So by modeling a collection of ideas instead of one idea, and then having those ideas search for different, better solutions, we can employ a random variation and selection process on a computer and have it find solutions to things that we might not have been able to think of ourselves, or that we might not discover if we were just working with a single solution.
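(To make the population idea concrete, here is a minimal Python sketch, a toy of my own and not code from Natural Selection: a population of random guesses at a target phrase is scored, the better half survives, and the survivors are copied with random mutations, generation after generation.)

    import random, string

    TARGET = "natural selection"
    CHARS = string.ascii_lowercase + " "

    def fitness(s):
        # How many characters of this guess already match the target phrase.
        return sum(a == b for a, b in zip(s, TARGET))

    def init():
        # A completely random guess, the same length as the target.
        return "".join(random.choice(CHARS) for _ in TARGET)

    def mutate(s):
        # Random variation: change one randomly chosen character.
        i = random.randrange(len(s))
        return s[:i] + random.choice(CHARS) + s[i + 1:]

    def evolve(pop_size=100, generations=500):
        # Selection: keep the better half, refill with mutated survivors.
        population = [init() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            population = survivors + [mutate(p) for p in survivors]
        return population[0]

    print(evolve())   # usually settles on "natural selection"

Even this crude keep-the-best-half scheme usually recovers the full phrase within a few hundred generations, which is the basic point: variation plus selection over a population, rather than polishing a single solution.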
David Kruse: And can you give an example of how you would model that? Maybe you can give an example from a certain project you worked on. What do you model as the multiple ideas?
David Fogel: Sure. Well, it depends of course on the problem, because we try to take insight from the problem and then put it in a form that the computer can use to do that optimization. Just as an example, let’s say we had the classic problem called the traveling salesman problem. That problem is where you are the salesperson, you have to start at home, you have a bunch of clients that you have to go visit, maybe 100 of them, and you want to get to all of them and return home in the shortest distance. What we would do in that case might be to represent the solution as a list of the clients or cities, the locations you are going to visit, and then we might alter that list by inverting a segment of it, or by randomly pulling one location out and putting it back in another place. Those are discrete optimization problems. They don’t have a continuous form; they are made up of discrete elements. So that’s one example. But another one you mentioned is my Blondie24 work, which was checkers, and Blondie25, which was chess. In that case we are evolving a neural network. A neural network is a bunch of functions that are supposed to look a little bit like how brain neurons work, and we connect them up with variable weights. In that case we are not trying to find the order of anything so much as the right connection strengths between those neurons, so that when we present it with a checkerboard or a chess position image, it knows ‘I like that image’ or ‘I don’t,’ and it can use that to figure out which moves it should make in order to reach more boards that it likes as opposed to ones it doesn’t. So in that case we are taking real numbers on a computer and varying them, maybe with a normal distribution, the bell curve, Gaussian kind of thing. It’s a different kind of variation, because we are operating on a different structure for that particular problem. But again, I try to come at each problem and think about what representation is intuitive to me for that problem, rather than having one representation that I try to fit onto the rest of the world.
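(To illustrate the two kinds of representation he contrasts, here is another rough Python sketch of my own, not the project code: a discrete, order-based mutation for a traveling-salesman style tour, and a Gaussian perturbation of real-valued network weights.)

    import random

    def mutate_tour(tour):
        # Discrete representation (e.g., a traveling-salesman route):
        # reverse a randomly chosen segment of the visiting order.
        i, j = sorted(random.sample(range(len(tour)), 2))
        return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

    def mutate_weights(weights, sigma=0.05):
        # Real-valued representation (e.g., neural-network connection
        # strengths): add bell-curve (Gaussian) noise to every weight.
        return [w + random.gauss(0, sigma) for w in weights]

    print(mutate_tour([0, 1, 2, 3, 4, 5]))      # e.g., [0, 1, 4, 3, 2, 5]
    print(mutate_weights([0.2, -1.3, 0.7]))     # slightly perturbed copy

The point is that the variation operator is chosen to match the structure of the solution: a reordering for a tour, bell-curve noise for connection strengths.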
David Kruse: Interesting, and you kind of answered my next question a little bit, but how do you decide whether to use an evolutionary algorithm or a neural network on a particular project?
David Fogel: Well, sure. It’s not necessarily either/or. As I mentioned, sometimes you can do them together; hybridizing can be a good approach. But the first thing I’m going to think about, if I’m trying to decide on an evolutionary algorithm, is: do you need it? Because if it’s a simple enough problem where some sort of conjugate gradient or other calculus-based method is going to give the right answer, or if there is a statistical package out there for predictive modeling that’s linear and you know your problem is linear, then it’s not the kind of thing you need an evolutionary algorithm for, and you are just wasting computational power to use one. It has to be a complex enough problem that there are probably multiple local optima on the response surface you are looking at. I can go into more detail about what that all means if that’s important, but if listeners are already familiar with the idea of a landscape, an energy surface or an error surface, then typically there are multiple locally right answers where the model is not good enough, or maybe not good at all, and that’s the kind of situation where you might look for an evolutionary algorithm. If you have a neural net, that’s a typical case where you are going to find those sorts of error surfaces, because you typically have many nodes in different layers, and if it’s deep learning then it’s lots of layers. By the time you try to train something that has 5,000, 50,000, 5 million, I don’t know how many, weights, you have a very complex landscape, and that’s the kind of thing where a conjugate gradient method may get trapped in a local optimum that may not be good enough for you, and an evolutionary method or some other random search method might be much better.
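(A small, self-contained illustration of the local-optimum point, my own toy example rather than his: a greedy hill-climber started on the wrong part of a bumpy one-dimensional surface stalls on a nearby peak, which is exactly the situation where a population-based or other random search tends to help.)

    import math, random

    def f(x):
        # A one-dimensional "response surface" with many local peaks.
        return math.sin(5 * x) - 0.1 * x * x

    def hill_climb(x, step=0.02, iters=20000):
        # Greedy local search: accept a small move only if it improves f,
        # so it climbs the nearest bump and stops there.
        for _ in range(iters):
            cand = x + random.uniform(-step, step)
            if f(cand) > f(x):
                x = cand
        return x

    x_stuck = hill_climb(5.0)   # started far from the best peak
    best = max(f(i / 100.0) for i in range(-1000, 1000))
    print("hill-climber reached:", round(f(x_stuck), 3))
    print("global peak is about:", round(best, 3))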
David Kruse: Interesting, okay. That makes sense. I’ve always wondered what the difference was, because both of those are pretty, I don’t know if popular is the right word, but definitely people are very interested in both of those these days, and that makes sense.
David Fogel: Yes, absolutely, and again, they certainly can go hand in glove, or you can choose one or the other depending upon what the application is. But I try to think about what structure I am representing, and then whether I need an evolutionary algorithm as opposed to something like a conjugate gradient method.
David Kruse: Got you, okay. And so you mentioned Blondie24, and I did too in the intro, so how did you get involved with games and programming games?
David Fogel: Well okay, so I’ve been a game person since I was a little kid, so that part’s easy, all the way to learning how to count cards in blackjack when I was 14, though I couldn’t do anything with it back then except practice, I guess. But I’ve always been involved in games, and the funny thing is, when you look at the real world as an engineer, a lot of it looks like a big game. Say you are trying to figure out how to optimize ticket prices at an event: you have someone who wants to maximize profit, a customer who wants to minimize expense, and they both want the sale to happen. They want you to go and you want to go, but you have to find the right price. That’s kind of a game too. So games are not always like checkers or chess; even a traveling salesman problem can be a game if you frame it correctly. And a lot of the things I have worked on, either in financial computing, predicting stock markets, or in homeland security, involve two players, maybe more than two, who are really playing a game. As for Blondie24, I happened to be working in Hawaii on breast cancer research, using evolved neural networks to help a mammographer look at a mammogram and figure out whether there was cancer present, and at the same time, in May 1997, the Deep Blue versus Garry Kasparov rematch happened. That was my inspiration, because I knew a lot about what went into Deep Blue. There was a lot of human engineering: the machine could look at 200 million positions per second, while Garry Kasparov can only look at a handful of positions per second. Garry would tell you that his is the right handful and Deep Blue’s are the wrong 200 million, and that’s probably true, but that was one thing. The other thing is that a lot of human knowledge went into Deep Blue: things from previous grandmaster games, endgame databases, how you score the positions that come up. That’s all algorithmic, but not machine-learned. They did try a little machine learning, as I understand it, but it didn’t really help them, so they set it aside. What I wanted to do was make a program that would learn how to play chess just by being shown the pieces and how they move, and have it learn as much as it could on its own without being given all that other information. Because ultimately I think one of the objectives of artificial intelligence research should be: what can we get a machine to learn that we wouldn’t know on our own, and how do we solve problems that we don’t already know how to solve? If we already know how to solve a problem, that’s great, but we don’t know how to solve every problem. So what can we do with machines that would help us solve those problems? I thought it was a good idea, so I called up a graduate student, a friend of mine, Kumar Chellapilla, and he thought it was a good idea too and was happy to work with me on it. But we very quickly realized that we had no funding and one Pentium 450 MHz machine running Windows NT. I don’t know how much they spent on Deep Blue, but it was a lot, and our whole team was just the two of us. So we said, let’s do checkers first.
So we did start working on checkers, and actually we were quite successful with it, or maybe I should say the computer was quite successful. In the end we were evolving neural networks that looked at the pieces you have on the board: where they are and how they can move, the physics of checkers, which we gave it, and also the piece differential, how many more pieces you have than the opponent. The rest of it, how to evaluate that checkerboard, it had to learn on its own by evolving a neural net, and that neural net had 5,000 or so weights in it. It looked at the board in various ways, computed whatever functions it wanted to compute, and said ‘I like this board’ or ‘I don’t like that board.’ In the beginning all of the weights were random, so all of the players were terrible. It’s just that some of the population of terrible players were worse than others, so we killed off the worst ones and made slight mutations of the better ones, and, long story short, after 840 generations, which took six months to run (the fact that Windows NT didn’t crash for six months is what I call the biggest miracle of this whole thing), we went out and tested the program against real people on Microsoft’s online gaming zone. After 165 games of testing by hand, we found that our program was ranked in the top 500 of the roughly 120,000 players on the site.
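(Here is the self-play loop he describes, heavily simplified into a Python skeleton. The real Blondie24 evolved a checkers-evaluation network with roughly 5,000 weights over 840 generations; in this sketch the sizes are toy values and play_game is an invented stand-in that returns a random result where the real system would play out a full game, so it shows only the structure of the approach.)

    import random

    POP_SIZE, GENERATIONS, NUM_WEIGHTS = 20, 50, 100   # toy sizes only

    def random_network():
        # Stand-in for the board-evaluation network: just a weight vector.
        return [random.uniform(-0.2, 0.2) for _ in range(NUM_WEIGHTS)]

    def mutate(weights, sigma=0.05):
        # Offspring are the parent's weights plus Gaussian noise.
        return [w + random.gauss(0, sigma) for w in weights]

    def play_game(net_a, net_b):
        # Invented stand-in: the real system would play a full game of
        # checkers and return +1, 0, or -1 from net_a's point of view.
        return random.choice([1, 0, -1])

    def evolve_players():
        population = [random_network() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # Each network earns points only by playing others in the
            # population; no human notion of a "good" board is supplied.
            scores = [0] * POP_SIZE
            for i in range(POP_SIZE):
                for j in random.sample(range(POP_SIZE), 5):
                    if j != i:
                        scores[i] += play_game(population[i], population[j])
            # Keep the better half; refill with mutated offspring.
            order = sorted(range(POP_SIZE), key=lambda k: scores[k], reverse=True)
            parents = [population[k] for k in order[: POP_SIZE // 2]]
            population = parents + [mutate(p) for p in parents]
        return population[0]

    best = evolve_players()
    print("evolved a network with", len(best), "weights")

The essential feature is that fitness comes only from games within the population itself, so no human judgment about what a good checkers position looks like ever enters the loop.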
David Kruse: Wow! That’s crazy.
David Fogel: And the funny part of the Blondie24 story is that the first time we went out to play, it was November, so I just made our screen name David1101: David for me, and 1101 for November 1st. We got games and we were doing well, but nobody really wants to play against David1101; who cares? Then the next summer came along as we were working on this, and the Star Wars movies had started up again, so we changed our name to ObiWanTheJedi, and all of a sudden a lot of people wanted to play us, which was great. We were getting a lot of games, but the better you get, the more chance you have of playing higher-ranked players, which we wanted, in order to really test whether we were at expert level, master level, or whatever it was. It turns out that the expert-level and master-level players are very gracious whether they win or lose, but the ones a little lower, class A and class B, I like to joke didn’t really have any class at all sometimes, because if they have a little setback in the game and they’re losing, they start swearing at you and they start taking a long time, three minutes or more on every move. You get to the point where you’re way up in the checkers game and you know you’re going to win, but it’s going to take over an hour to get there, and they’re going to be swearing at you like crazy for that hour. So I turned to Kumar and said, ‘Who are these guys who are swearing at us?’ We figured after a while, well, it’s not women; women aren’t going to swear at you over checkers. And it’s not older guys; as an older guy now I can tell you I don’t really get worked up over a checkers rating. So it must be young guys. And we figured, hey, what would mollify a young guy better than a young woman? So we changed our name to Blondie24 and we made up a whole story about her. She is a 24-year-old math major at UCSD, she surfs and she skis, she is good-looking, and she is looking for a boyfriend. And how did she get so good at checkers? Well, she broke her leg skiing and had to lie low for six months, which was kind of true, right? The machine had taken six months to get there. Then we went back out as Blondie24, and it was really hilarious. The same people who had just been swearing at me (I was taking very good notes about who was doing what) were now asking me repeatedly, ‘Come play with me on table 36, Blondie.’ They couldn’t be nicer to me. Then you get in there and you play a game with them and they start asking where are you and what are you wearing, and I’m like, you know, you really don’t want to know what I’m wearing. But it was great. We finished that, wrote several technical papers and a book about it, and finally did get some funding from the National Science Foundation to work on chess as well. I got to meet Garry Kasparov as part of that process, and in the end we got some wins over a program that was among the top five chess programs in the world, and ours was also the first machine-learning chess program to defeat a nationally ranked human master. So it was quite a lot of fun, a lot more fun than I thought it was going to be, even though I thought it was going to be a lot of fun. Very interesting, and I wouldn’t have traded any of it.
David Kruse: Interesting. And of course I have to ask, what do you think of DeepMind’s program beating the champion at Go? Was that a significant event, or what are your thoughts on that?
David Fogel: I do think it’s a significant event. I think there are more significant events to come, though, because there is still a lot of human knowledge that goes into something like that, with opening books from grandmasters and positional values and things like that, and of course a huge computational lift from GPUs; I forget how many there were, 48 of them or something like that. So yes, I think between me and the colleagues I spoke with in Vancouver at our IEEE Computational Intelligence meeting just a little while ago, by and large most of us were surprised that the result came about as quickly as it did. But in terms of where we need to go in order to make true scientific strides in autonomous machine learning, without all of that given information, I think we still have a long way to go.
David Kruse: And what would that look like? Would that be more like training your machine from scratch with no prior knowledge?
David Fogel: Yeah, exactly, kind of how we did it with Blondie24. We just started with primitives, you let it play against itself, and whatever primitives win, you let them build up over time. The question is how much time that takes. I was asked a good question at a public lecture I gave in Vancouver: when you are trying to design a solution to a problem as an engineer, that’s probably not the way you want to go, right? Because if we know things about a particular problem, we might know the physics, we might know previous attempts at a solution, there might already be some form of a solution available, and we ought to start with that and then improve from there. We don’t want to say, well, let’s pretend we don’t know anything and let the machine learn everything. So as an engineer, when I come to a problem, I want to learn everything there is to know about that problem and use my intuition and insight, and hopefully the engineers who already worked on it have their own insight that we can leverage. But there are those problems where we say, we don’t really know what to do, or we haven’t even formulated it as a problem yet. How can we use AI to leverage that, and have machine learning help us figure out what to look at? I think those are the kinds of things that still have a lot of room for breakthroughs.
David Kruse: Okay. And what type of research is needed in order to get those breakthroughs? Just continuing to create new algorithms and more training, or how would you go about improving that?
David Fogel: That’s a great question that I’m not sure I have a great answer for, because it’s kind of in the realm of what we don’t know. But I think part of it is going to be how we think about representing solutions to problems, and maybe some representations are going to be more amenable to machine learning algorithms than others. I guess we already know that, but which ones are going to be more amenable to general problem-solving capabilities? I think that’s really a wide-open area. I think it’s also going to take a synergy between advances in robotics and advances in computer hardware, along with advances in algorithms, because the interaction of a device with its environment is really important, and I think a lot of that has been missed up to this point. We have to do things in simulation, but the simulation may not be exactly the same as being out in the real world. Self-driving cars are around the corner, and that’s fine. I still would like to see video of a woman with a baby carriage stepping off in front of a car that’s moving 30 miles an hour through an intersection with a green light, and let me see how that self-driving car handles it. Hopefully it handles it well, but those are things we need to see in actuality, not in simulation, right? None of us would trust it in simulation.
David Kruse: No, that’s true. And what type of projects are you working on now?
David Fogel: Well, we do a lot of different things at Natural Selection, including a lot of biotech work, but I’m also working in financial computing. I have a spinoff company from Natural Selection Incorporated called Natural Selection Financial, which is an investment advisory company, and I was previously a hedge fund manager for two years, using these sorts of algorithms for predicting stock market directional moves. The other thing we work on is a program called EffectCheck, that’s E-F-F-E-C-T-C-H-E-C-K, and people can find out more about it at effectcheck.com. It’s a sentiment analysis program that helps you figure out what emotional response may be likely from communications you are working on. From marketing to legal to press releases or any other general communication, if you want to have an effect on people’s emotions, or maybe just to understand better how the emotional tone of your own writing is going to come across to someone else, it can be a very useful tool. So between NSI, Natural Selection Financial, and EffectCheck, those are the main things I’m working on.
David Kruse: Interesting. You get involved with so much cool stuff, between the games and the financial industry and the biotech, and I guess that’s the beauty of having this base knowledge you can apply to so many different fields. It’s interesting.
David Fogel: It is, and it’s also the beauty of the algorithmic space I was fortunate enough to get into, because it isn’t pigeonholed into any one application. It really is as diverse as the nature around us. Whether you are designing a ship to go beneath the surface and you want to model how a penguin is designed, which is a nice end product of evolution, or a dolphin or something else, nature can give you engineering inspiration for many, many different things, whether it’s flying, driving a car, swimming, sending radio signals, all sorts of stuff. So there are really no limits to the types of applications that I’ve been exposed to or could be exposed to.
David Kruse: Interesting. And how was it running a hedge fund? I mean, you don’t have to tell us exactly how you did, but did that help you? Once again it’s kind of like a game, right, and…
David Fogel: It is, it is, and it still is. So that’s a good question. We started Natural Selection Financial in 2006, trading the S&P 500, and I’m not really sure what the advertising rules are around stuff like that, so I’m just going to speak in generalities; if anybody wants to follow up for specifics, they can. We did very well and we were bought out, and as part of that buyout I became a hedge fund manager with about $60 million. But it was a very challenging time, not so much challenging for us as challenging for the world environment. The buyout happened in April of ’08, we started trading institutional money in August of ’08, and then Lehman Brothers went under in September. So there weren’t a lot of other companies or firms or family offices doing a lot of allocation at that time, and after two years the fund’s backers had to withdraw the money, so we had to close it down. That was that, and we started Natural Selection Financial up again as an investment advisory firm. But I met a lot of interesting people, I got to fly all over the world and meet people from Hong Kong and Switzerland and England and all over the place, and I went through a very interesting time. Our methods are generally not correlated to the S&P 500, so it didn’t really matter to us performance-wise whether the market was going up or down. I think it was great timing for us in getting acquired that way, but unfortunately it was also a bad time because of the financial crisis that happened at the same time. Those things are out of our control, so we just have to make the best of it as we go.
David Kruse: Got you.
David Fogel: Whatever problem you’re working on, whatever the environment is, sometimes there are tailwinds and sometimes there are headwinds.
David Kruse: Got you. So with the financial advisory software, do you help allocate portfolios, or can people hire you as a third party to run some of their funds, or…
David Fogel: Great question. It’s both of those. I do help with portfolio construction, but the main part of the evolutionary algorithms we work on is day-to-day trading. They are looking for patterns in the market, and those patterns change over time, so the algorithms change over time by evolution to continue to search for those patterns. They decide whether they have an idea that the market might be going up or going down the next day, or whether they can’t tell, and then we take a position for the next day if that’s what the algorithm tells us to do. It’s all systematic, not discretionary. It’s not a function of how I feel on a given day; I have nothing to do with it. It’s just a matter of what the algorithms tell us to do.
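(As a purely illustrative sketch of what "evolving algorithms that look for patterns and call the next day up, down, or no call" could mean, here is a toy Python example run on synthetic random data. It is my own construction, emphatically not Natural Selection Financial's method, and not something to trade with.)

    import random

    def signal(rule, returns, t):
        # A candidate "pattern": if the average of the last n returns is
        # above a threshold, call the next day up; below minus the
        # threshold, call it down; otherwise make no call.
        n, thresh = rule
        recent = sum(returns[t - n + 1 : t + 1]) / n
        if recent > thresh:
            return "up"
        if recent < -thresh:
            return "down"
        return "no call"

    def fitness(rule, history):
        # Walk forward through the series and score each up/down call
        # by whether the next day's return agreed with it.
        n, _ = rule
        score = 0
        for t in range(n - 1, len(history) - 1):
            call = signal(rule, history, t)
            if call == "up":
                score += 1 if history[t + 1] > 0 else -1
            elif call == "down":
                score += 1 if history[t + 1] < 0 else -1
        return score

    def mutate(rule):
        # Vary the lookback window and the threshold slightly.
        n, thresh = rule
        return (max(1, n + random.choice([-1, 0, 1])),
                max(0.0, thresh + random.gauss(0, 0.001)))

    # Evolve rules against a synthetic return series (pure noise here).
    history = [random.gauss(0, 0.01) for _ in range(500)]
    pop = [(random.randint(2, 20), random.uniform(0, 0.01)) for _ in range(40)]
    for _ in range(30):
        pop.sort(key=lambda r: fitness(r, history), reverse=True)
        pop = pop[:20] + [mutate(r) for r in pop[:20]]
    print("best rule (lookback days, threshold):", pop[0])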
David Kruse: Interesting, okay. And I think we are just about out of time. Do you have time for a couple more quick questions?
David Fogel: Sure, let’s do it, let’s do it.
David Kruse: Okay, all right. Well, I was curious about the biotech you mentioned. Very curious what type of projects you are working on. Areas around drug discovery, or what are you working on?
David Fogel: Well, I have done a lot within drug discovery. In fact, one of the first projects we did at Natural Selection Incorporated, back in the 1990s, was on approaches to [inaudible] and the computational, evolutionary methods for doing that, with a company called Agouron, which was later bought up by Warner-Lambert, which was in turn bought up by Pfizer. So it’s software that is still being used at Pfizer, even though that was many, many years ago now. My brother does a lot of things, including work on how HIV evolves in different people and predicting how it’s going to change. We’ve also done work in RNA analysis, trying to find RNA that people might be able to target as therapeutic sites, and also medical devices. I mentioned the breast cancer research that we did, but I’ve also worked on processes for helping people with ringing in their ears, also called tinnitus. So there is a wide variety of things we’ve done on the biotech side, and I suspect we’ve only done a thimbleful of the kind of stuff we really could do if we had more opportunity.
David Kruse: Interesting. So with the ringing in the ears, how were you brought in and how did you help with that?
David Fogel: Well, I was brought in because about 12 years ago a virus decided to get into my right ear, and I started getting really bad ringing in my right ear, and then it transferred over to both ears, which is an interesting process, because that’s not where the virus went. I went through probably the usual medical follow-ups that you would do, and there’s unfortunately still not too much that medical science has to offer people who are suffering from it. Then I learned that 50 million Americans have tinnitus, maybe 300 million people worldwide, with five to ten million having it at a level that’s debilitating to their lives, and I started thinking, ‘Well, what can I do to help myself here?’ So I started looking at tinnitus maskers. You know, I can listen to white noise. Unfortunately the brain likes to hear real sounds, and the ringing is a sound it’s making up, hallucinating basically. So you can listen to white noise, but who wants to listen to white noise? You might as well listen to the ringing. So I started thinking: could we change the white noise, could we mutate it somehow, could we make it adapt to me so that it would be most effective for me? Over a process of about two months of engineering different waveforms, I actually got something one day that I listened to for about 30 seconds; I turned it off and the ringing went away. Then it gradually came back. It wasn’t like, ‘okay, you’re cured.’ No, it didn’t work like that. But I would listen to it again, and over time the ringing actually did go away, and I very, very rarely get any ringing in my ears now. I do sometimes when I’m very tired or fighting a cold or something, but typically I would say I’m pretty much tinnitus-free. So I took that and said, what I really just did there was an interactive evolutionary algorithm. I was the objective function: I judged whether it worked for me, and the process was done by hand. Now why don’t we make that automated? So we submitted a grant to NSF for that and we got funded, and we built a system that was tested on 16 patients in a clinical trial, and it helped about half of them. We’ve still been looking for Phase 2 funding on that, but we do have a patent that was issued in 2009, if I recall. So we are still looking for people to pick up on it and hopefully get it to be an FDA-approved device that other people can benefit from.
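(The "I was the objective function" idea, written out as a bare-bones interactive evolutionary algorithm in Python: the listener rates each candidate sound and those ratings drive selection. The six parameters are invented placeholders rather than the waveform controls of the patented system, and the audio synthesis and playback are left out.)

    import random

    def random_candidate():
        # Placeholder parameter vector describing a masking sound
        # (e.g., band frequencies and levels), each scaled 0..1.
        return [random.uniform(0.0, 1.0) for _ in range(6)]

    def mutate(params, sigma=0.1):
        # Vary the sound slightly, keeping parameters in range.
        return [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in params]

    def listen_and_rate(params):
        # The human is the objective function: synthesize and play the
        # sound described by params (omitted here), then ask how much
        # relief it gave.
        print("candidate parameters:", [round(p, 2) for p in params])
        return float(input("relief from 0 to 10? "))

    def interactive_ea(pop_size=6, generations=10):
        population = [random_candidate() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=listen_and_rate, reverse=True)
            parents = ranked[: pop_size // 2]
            # Keep the sounds the listener preferred; vary them next round.
            population = parents + [mutate(p) for p in parents]
        return population[0]

Calling interactive_ea() would walk a listener through ten rounds of six candidate sounds each, keeping and varying whichever ones the listener rates as giving the most relief.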
David Kruse: Interesting. My wife has that issue sometimes so I was curious. Yeah.
David Fogel: Yeah, a lot of people do.
David Kruse: There is not much you can do. That’s really interesting.
David Kruse: Okay, I know we are pretty much out of time, but one last question, and maybe you don’t have much free time, because it sounds like you are working on a lot of really interesting projects. But I was curious, if you have free time, do you work on any side projects? I guess the ear ringing was almost a side project for you at one point.
David Fogel: Everything is a side project to some degree, I guess; you have to figure out which ones become the real projects. When I’m not spending time with the family, I would say for hobbies I enjoy playing piano. I have played since I was five years old, and I was very lucky: I got to play professionally at a local hotel in San Diego for 16 years, from 1994 to 2010, once or twice a week. And I’m also now getting into astrophotography. It’s nice to go out to the mountains here in San Diego, or to the desert just on the other side of the mountains where the skies are pretty dark, and find out what it takes to take decent pictures of the Andromeda galaxy and things that are even farther away.
David Kruse: Oh, interesting, I would love that. All right, well, I could talk to you for a long time, but unfortunately we should probably cut it off. This has been great. David, I really appreciate hearing about what you are working on and what you have worked on. It’s definitely very inspiring, and I like your energy, so that’s a…
David Fogel: Thank you. It’s been fun, and if you want to do a part two sometime, just let me know.
David Kruse: Yes, I might take you up on that. But you know, I appreciate it and thanks again David and thanks everyone for listening to another episode of Flyover Labs. As always, I appreciate it. Bye everyone.