Developer Journal: MVC and Interoperability

The desire for code re-use weighs heavily in the design of any program or application. It competes with the desire to implement applications in their native, and oftentimes proprietary, frameworks.

Arguably the most important part of a mobile application to develop natively is the view code. Users almost always spot non-native views immediately on any sufficiently complex data-centered app: web views and non-standard behavior give it away. Conversely, perhaps the least important part of an application to implement natively is the model. The storing, manipulation, and retrieval of data is oftentimes very similar across languages and frameworks: a linked-list is a linked-list no matter what language it’s in.

Perhaps we can use this dichotomy to best satisfy the competing desires of code re-use and native implementations: keep your views and controllers in native code, but implement your data model in shared code. Specifically, I’m wondering if I can move an application’s model to C++ on iOS, Android, and Windows Phone and only implement views and minimal controllers in native code.

In fact, ideally, all data-related tasks would live in shared code: anything involving databases, making requests to servers, and parsing responses are the tasks which come to my mind. I think that in order to achieve this, I’m going to need to compile a framework for manipulating a database, a framework for making network connections, and a way for native code to request and receive data.
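As a sketch of what that shared layer might look like, here is a minimal C++ model exposed through a plain C interface, the kind of surface that Objective-C, JNI, or C++/CX glue code could call into. Everything here (`model::Store`, `model_put`, and friends) is hypothetical, purely to illustrate the shape of the idea:

```cpp
// Hypothetical shared model layer: C++ inside, C linkage outside, so each
// platform's native view/controller code can link against the same library.
#include <cassert>
#include <cstring>
#include <map>
#include <string>

namespace model {
// The shared "model": a trivial key-value store standing in for the
// database / networking / parsing layer described above.
class Store {
public:
    void put(const std::string& key, const std::string& value) {
        data_[key] = value;
    }
    // Returns nullptr when the key is absent; the pointer stays valid
    // while the Store is alive and the key is not overwritten.
    const char* get(const std::string& key) const {
        auto it = data_.find(key);
        return it == data_.end() ? nullptr : it->second.c_str();
    }
private:
    std::map<std::string, std::string> data_;
};
} // namespace model

// The C interface: the only surface the native layers need to see.
extern "C" {
    void* model_create()         { return new model::Store(); }
    void  model_destroy(void* s) { delete static_cast<model::Store*>(s); }
    void  model_put(void* s, const char* k, const char* v) {
        static_cast<model::Store*>(s)->put(k, v);
    }
    const char* model_get(void* s, const char* k) {
        return static_cast<model::Store*>(s)->get(k);
    }
}
```

The C functions are the only symbols the native view and controller layers would ever see, so each platform writes its glue once, and the model logic itself compiles unchanged for iOS, Android, and Windows Phone.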

I think a design like this would at least work, and it might even be desirable. Maybe the best way for me to find out is to try it.

iPhone 6 Plus First Impressions

I was a holdout. For three years, I used the magnificent iPhone 4S as my trusty telephone. When I upgraded from a feature-phone Nokia handset to an iPhone 4S, all the things I could do made me forget how I ever did without a smartphone: getting all my email on-the-go, using the decent web browser when a computer wasn’t around, keeping myself completely amused in all idle moments.

A week ago, I picked up the Ridiculously Big iPhone® and it is also one of those products that I already can’t remember how I did without.

My 4S really complemented an iPad for some tasks: where the phone could send a quick message, the tablet could comfortably guide you through a book.

This 6 Plus, on the other hand, does not play so well with an iPad. It demands use: as big as it is, it still fits in your pocket (barely), its screen is just shorter than the iPad is wide, and it always has an Internet connection (connecting an iPad to 4G is quite pricey). It’ll send your quick message and then guide you through that book as you quickly switch to your train-ticket app, or whatever else it is you do.

Any media shines on the 6 Plus: it’s in your pocket, so you can play music; it has a massive screen, so you can play games or browse the web or read a book; the portrait keyboard is well-suited to two-handed use, so you needn’t shy from heavy-input use …

But there’s one thing that totally sucks about the 6 Plus.

Checking the time.

Oh I know, the humanity, you’re walking somewhere to do some lovely fun activity or something and you want to check if you’re late and you have to pull out a 5.5 inch telephone to find out. How hard.

But seriously, this device isn’t great for glance-able information; using it demands attention.

But Apple has no need to create the need for a product category which involves glance-able information.

Right guys?

Right?

'The Productive Programmer' Review

It’s a natural obsession for programmers: the more effective code you can write in a fixed amount of time, the better. This obsession, perhaps better labeled a professional narcissism, is well-indulged by Neal Ford’s The Productive Programmer. The book’s central theme is that much of what makes computers highly usable also slows a user down, and that by taking inspiration from the way the Super Clever People That Made Computers used them, we can make our own use more effective.

Ford splits the goal of becoming more productive into two parts: the “mechanics” and the “practice.” Mechanics are about actual tools and code-snippets: using a launcher, a more advanced clipboard, terminal add-ons for your filesystem navigator, and even code snippets for scripts. While some of the software suggestions are a little dated even only a few years after publication, the take-away message is still intact: he describes the sorts of interactions you should seek to have with a computer as a programmer.

You should avoid using the mouse when possible, minimize the number of clicks when the mouse is absolutely necessary, use search to find applications and files, and find a way to let your computer parse commands from your intent. More generally, The Productive Programmer advises you not to repeat yourself: if you tell your computer to do something, chances are you’ll want to do it again later, so automate it, speed it up, minimize the effort.

The “practice” that Ford describes is oriented towards software development advice, the sorts of methodologies and development styles that decrease wasted time. For instance, many if not all development shops use a version control tool, which allows a developer to revert to a version with an important code-snippet or a working build or whatever it may be. But there are many more tools and processes which can help equally as much.

My favorite of these was the advice to use a canonical build machine. I have squandered many hours setting up a new machine with old code, hunting down libraries and versions of programming languages and getting the right configuration. Instead, Ford advises you use a machine which has all the tools, libraries, and versions required to run your piece of software. With this sort of machine, it’s unambiguous how to run your app, and you can even image new machines from the canonical machine.

The Productive Programmer is an accessible introduction for the ambitious user/soon-to-be power-user. As should be expected, some of the tools Ford recommends are a bit dated (though most of them are still around). But the method and principles Ford exemplifies are simultaneously from a Golden Era of computing and a good vision for the future.

WWDC2014 Reaction

Apple’s Worldwide Developer Conference may be one of the most misunderstood conferences held by a public-facing company. The gap between what the event is and how the event is perceived makes for frustrating comment sections on tech blogs, news outlets, and developer communities. Well, more so the first two and less so the last.

The WWDC is an annual conference Apple holds to tempt developers into developing on its platforms with its technologies. The bulk of what the conference is consists of workshops and presentations on software engineering. Conversely, the bulk of how the conference is perceived is the conference’s keynote presentation, with which the WWDC has become nearly synonymous.

As a recovering fanboy and a budding developer, I offer this post to interpret what happened at the event and specify what I am most excited about.

First, I will outline some of my thoughts about the changes to OS X. This is followed by the changes to iOS. Interestingly, however, the best of the WWDC announcements did not concern either platform alone, but rather the interplay between the platforms. That is, the excitement is not in any single new software feature, but in the single experience that Apple is cultivating across its different products. So that’s what I’ll cover after OS X and iOS. Finally, I will cover some of the news that Apple had for developers.

OS X Yosemite

Apple opened the event with the updates to its desktop operating system, OS X. Aside from bumping the version number from 10.9 to 10.10, Apple has brought a new look, some new core features, an update to Spotlight (hit Cmd-Space to find out what that is), some radical Safari changes, a few convenient Mail changes, and, most importantly I think, a revamp of iCloud’s capabilities.

The new look is obviously a big part of how Apple’s operating system is perceived, but more crucially a big part of how the event is perceived. So what specifically has changed, and what does it have to do with the perception of Apple and the WWDC? The changes are:

  • Flat design, (fewer color gradients)
  • iOS-style translucency and Gaussian blurs
  • Helvetica Neue replaces Lucida Grande

These changes are all in the right direction for a modern look for Apple’s desktop OS, says the fanboy devil sitting on my left shoulder, but they are not really why the WWDC’s announcements for OS X are exciting, says the developer angel on my right shoulder.

Before getting to what’s exciting, an interesting pattern emerges from noticing that the new Spotlight sherlocks a smart search tool known as Alfred. (Sherlocking is when third-party developers ship a particular feature first and Apple subsequently implement and release it themselves.)

The most important feature announced yesterday is called iCloud Drive. If Spotlight sherlocks Alfred, then iCloud Drive sherlocks Dropbox and Google Drive. Contrary to the per-app file management strategy that Apple has been taking, iCloud has been opened up to allow you to manage everything in iCloud via Finder. This includes everything that your iOS apps store there, anything you want to share between all your Macs, and anything you’d like to send to your apps.

Interestingly, this is the latest in a series of features which could be uncharitably characterized as exemplifying the Steve Jobs quote aptly stolen from Picasso, that “Good artists copy, great artists steal.” As Gizmodo point out, Apple have done this before with Instapaper and Safari Reading List.

One of the reasons that iCloud Drive is so important going forward is that, as I have heard many times and take to be the general sentiment, Apple’s web services are underpowered. Compare iCloud’s present collaboration and file-sharing functionality to Dropbox and Google: Dropbox is the de facto standard for sharing filesystems, and Google Drive is the de facto standard for sharing and collaborating on files.

With iCloud Drive, Apple is competitively placed to implicitly take over both of these use cases on Macs and iOS devices. Google Drive and Dropbox are software additions to any machine, requiring separate accounts and configuration. Conversely, every new Mac shipped and every OS X upgrade will feature this tool by default, and with the advent of BYOD, Apple is positioned to be the default.

This is not the only place where Apple continues to wage its “thermonuclear war” on its old friend: the updates to Safari place Wikipedia results and other items before conventional web-search autocompletions.

tl;dr What you should expect is that in the Fall, a visual overhaul of your Mac will be released. It will feature a number of improvements, the biggest of which is iCloud Drive, which will give you an OS-integrated Dropbox and Google Drive feature set.

From a developer’s technical point-of-view, the improvements to OS X are tepid at best, heated by the monumental improvement to Apple’s web services. From a fanboy’s point-of-view, I cannot wait to have the visual updates to OS X light up my Retina display.

iOS 8

Contrary to the glaring visual changes to OS X, the changes to iOS are not visual at all. iOS 8 is to iOS 7 what Mountain Lion was to Lion: a subtle but global fleshing-out of the functionality. Arguably, many of the features which iOS 8 brings, like improvements to Siri, a predictive keyboard, interactive notifications, and a better photo-management system, should have come to iOS a while ago. What I mean is that competitors shipped these items before Apple did, often long before. The case in point is how Android-y the predictive keyboard is, and Google’s ever-better Siri clone.

One of the most surprising announcements Apple made was that it is allowing third-party developers to create and ship software keyboards for iOS devices. The rationale for disallowing the practice has historically been that it opens up a number of security and experience issues. Specifically, if a developer has access to your keyboard, that developer has access to everything you type, and perhaps you type sensitive or personal information. Further, anyone accustomed to the platform could previously pick up any iOS device and instantly begin typing, while some software keyboards are very different from conventional software and hardware keyboards, involving everything from swiping to drawing.

In a similar vein, iOS 8 gives developers a safe means to have their apps interact with one another. For instance, if you are creating a social network and want your share sheet presented in the OS when a user wants to share, iOS 8 gives you a way to do this. Or if Instagram wants to allow other photo applications to use its filters, iOS 8 gives Instagram’s developers a way to offer that service. The reason this is similar is that it, too, has been disallowed by Apple in the past for security reasons: apps are sandboxed to their own files to protect the user’s other files.

One of the reasons apps have been so successful is that there is little to no risk of any app changing your phone the way Windows XP malware might. The permissions that Windows XP granted to executables were much broader than the permissions an iOS app has, which is why a user can be quite careless about what they install, in a manner that proved quite catastrophic on more lenient systems.

The common denominator of the additions of third-party keyboards and app interactivity is a paradoxical “highly controlled openness.” What I mean by this is twofold:

  1. Yes, you can now have apps interact and change your default keyboard.
  2. No, potential attacks are not possible in the way they are on competing products.

Specifically, the reason this approach is not prone to malware is that keyboards are not given default access to the network, for instance. If a keyboard cannot connect to the Internet, a sketchy company cannot make a key-logger without your permission. Further, inter-app communication is a form of openness in that it allows developers deeper access to interactions outside their application, but it protects the user by sandboxing the communication itself.

I take this to be how inter-app communication and third-party keyboards “should have” been done in that it is the sleekest and safest way it has been implemented so far. I hope that the common denominator between these two new features and iCloud Drive is this: When I use iCloud Drive, I hope it has the feeling of being late to the party, but being the best dressed.

Apple also takes aim at the developers of the more ephemeral social networks which have been enabled by more powerful devices and people’s desire to take increasingly complex selfies: namely, Snapchat’s feature of quickly sending video and images to another person on a timer. In iOS 8, the Messages app allows you to send video, audio, pictures, and your location to others.

Another strategy Apple is taking with iOS 8 is defining a central location for existing but disparate services. The health-quantification apps are all over the place: lots of hardware, lots of software, and little communication or unified direction. With iOS 8’s HealthKit and Health app, Apple has defined a way (HealthKit) for all of these developers and manufacturers to centralize the information and services they provide into a single place (the Health app).

Apple mirrors this approach for home-automation products and services. With iOS 8’s HomeKit and Siri, Apple has defined a way (HomeKit) for all these developers and manufacturers to centralize the information and services they provide into a single place (Siri). How does Siri control your home? Well, you need only ask her. When you return home from a long day, you need simply groan into your phone “I’m going to bed,” and Siri will know to lock your garage, dim your lights, lock the door, and check that the dog has enough water.

tl;dr Apple’s iOS 8 will give you things you’ve wanted for a long time: interactive notifications, third-party keyboards, family iTunes accounts, improved photo management, improved Siri, and inter-app communication. It will also give you features you didn’t know you wanted but which, come to think of it, are the future: centralized and powerful health quantification, and centralized and intuitive home automation. All of these services share a few very Apple-y common denominators:

  1. The features take existing services and integrate them at the OS level,
  2. The features are late but are much better in virtue of being integrated,
  3. The features are better in part because of how secure they are.

From a developer’s technical point-of-view, the updates that Apple is bringing to iOS are monumental, especially when taken in tandem with the framework updates Apple is making. From a fanboy’s point-of-view, the updates to iOS are tepid at best: not only are there no exciting visual changes, but most of the added functionality is long overdue.

iCloud and Continuity

If OS X is the Father which was there at the beginning and iOS is the Son that redeemed Apple as a company, then iCloud and Continuity are the Holy Ghost, the ever-present and all-knowing space between your phone, your tablet, and your computer. It is in this space that the WWDC was most exciting to me as a user. On this front, Apple announced four new features:

  1. Handoff,
  2. Airdrop between platforms,
  3. Instant hotspot, and
  4. SMS and phone calls on all platforms.

Handoff is a feature that allows you to begin working on an email or a document on any one of your devices, and subsequently continue working on it on any other device instantly. For instance, if you are working in Pages on a blog post and you want to move from your desk with a desktop computer to the conference room with your tablet, when you open your tablet you will have an indicator at the bottom of the screen to open up and continue work on that document. For far too long have I carefully selected which device to work on a given task with because of limitations in typing and portability, and I am very pleased that I can now just use whatever I am presently on without the need to awkwardly transfer files.

However, if I do want a one-time transfer of a file from my Mac to my iDevice, the updated AirDrop allows me to do that. This will be very convenient when, as I often have, I need to transfer a file on my phone to someone working on their Mac, or vice-versa. This is a much-requested workhorse feature whose utility should be evident. Much in the same boat is Apple’s now-easier Instant Hotspot feature, which lets me use my phone’s cellular data as the Internet connection for my Mac: another hugely convenient addition.

The feature I am most excited to get my hands on as a user, however, is that no longer do I have to use my phone exclusively to make phone calls or use SMS. When I pair my phone with my computer and my tablet, now I can use those protocols from any of my devices. Hallelujah.

But this is interesting not only from a user point-of-view, but from a strategic one. With Facetime and iMessage, Apple entered the telecommunications market subversively. Facetime Audio and iMessage are barely noticeable from the standpoint of the user; they are simply a more convenient and feature-rich version of what telecommunications companies already offer. In fact, many other companies offer instant messaging and VoIP. What’s different about Facetime and iMessage is that they are seamlessly integrated into your existing SMS and telephone, technologies that have not much changed in the last hundred years. By expanding its influence to all SMS and phone calls, Apple is positioning itself to quietly topple public-facing telecommunications companies from the bottom up.

tl;dr If you own devices from all three, or any two, of Apple’s product categories, the intercommunication and shared experience are better than they have ever been or are anywhere else. Where Google’s Android is ubiquitous and Microsoft’s Windows is homogeneous, Apple’s OS X and iOS are seamless. More simply, you’ll be able to share documents, share your 4G, take phone calls on any device, and send/receive SMS on any device.

The user’s perspective transcends the developer/fanboy divide: these features will help me do all of my tasks better and let me use my favorite devices more.

Developers

Apple announced a new programming language to replace Objective-C, called Swift. The features presented in the keynote were very, very exciting, most of all the “Playground” feature. As far as I can tell, when you write Swift code in an Xcode playground, Xcode performs some introspection and analysis on each compile and displays, on the right-hand side, a visualization of what your code does. So, for instance, if I write a loop which runs 100 times and moves a UI element from the bottom of the screen to the top, Playground will show that it runs 100 times (if I coded it correctly) and show me the UI element’s movement right inside Xcode.

I see Playground as one of the first seriously compelling reasons to move from a terminal-based text editor to an IDE. Of course there are others: IDEs make it easier to use debugging tools and have less of a learning curve. But I had not known a task that was impossible with my favored vim until this Playground feature was demoed.

The fanboy in me obviously doesn’t care about a new programming language, but you may be surprised to learn that the developer in me is strangely apathetic as well. Until more information is released and I get the opportunity to try writing an app in Swift, I reserve my judgement. Why? Because, frankly, learning a programming language, and especially learning it well and fully, is very hard. Furthermore, Swift, like Objective-C, is a platform-specific language. Of course you can use Objective-C with GCC on any machine, but it is Cocoa that really makes Objective-C a pleasure to develop in.

What is revolutionary from a development point-of-view, however, is Apple’s announcement of “CloudKit.” CloudKit is an API a developer can use to securely store and efficiently retrieve cloud-based data as though it were in a local database. Apps have become much more stack-heavy in recent years: when you develop an application for an Apple product these days, you are not just developing for one device but for the entire ecosystem. It used to be that an iPhone app would mostly just run on the intended device and maybe eventually the Web. Now, application development requires a back-end for authentication, accounts, in-app purchases, and analytics. This is a massive undertaking for a lone developer looking to publish their idea. CloudKit lets me do what I do well, compiled, on-device application development, even more powerfully, because I can define the server-side logic on the device and offload the task of running and maintaining it to Apple’s servers. Revolutionary.

Practicing Philosophy

The Unofficial Guide to Getting the Most of Undergraduate Philosophy at Rutgers University

A warning. Philosophy will keep you up at night. Your consciousness might be an illusion. Skepticism looms over everything you thought you knew. Our understanding of time may be fundamentally flawed. You could be incapable of expressing yourself to others, doomed to loneliness forever. It’s possible you’re part of a sociopolitical machine which deals systematic injustice. Or maybe there isn’t such a thing as morality.

But, like many things that are worth losing sleep over, philosophy has been neatly regimented into professional academia for hundreds of years. I’d like to offer this guide as an invitation to the major. I’ll share what I’ve learned about why and how to study philosophy as an undergraduate at Rutgers.

First, an introduction: Rutgers University houses one of the top three philosophy departments in the world. Not only that, but the department just received a $3 million donation from the Andrew W. Mellon Foundation and an anonymous donor to fund the department’s first endowed chair. This will be sure to bring another of the world’s top philosophers to Rutgers. Here and now is the best time and place to start a philosophy major or minor, and here’s how …

The first step: The Philosophy Club

If you’ve made it this far, then you’ve probably always considered yourself philosophically minded, but are unsure if you’re really interested in the major. Or alternatively, you’re in the major and you’re looking to broaden your philosophical thinking. The Rutgers Undergraduate Philosophy Club is perfect for this.

Picture Greek philosophers, and you see togas and beards. But when you picture Rutgers philosophy, you should see a conference table flanked on every side by sharp students, led by a distinguished member of the faculty or a rising star in Rutgers’ graduate program. Since its creation in its present form last year, the Rutgers Philosophy Club has been the best place for undergraduates to connect with a broad array of philosophical topics. Meetings are held on Fridays at 5:00 PM; while they officially end at 6:30 PM, the room is often buzzing long after that. The meetings are open to all.

While every speaker is free to choose his or her own format, the most frequent is a presentation followed by a question-and-answer session. Every meeting is entirely independent of the others, and the presentations often assume nothing about the audience’s philosophical background. Likewise, the curious-minded are free to wander into whatever meeting they choose. This is because the Rutgers Philosophy Club practices “analytic philosophy,” which strives to be straightforwardly clear about both the question being asked and the answer given.

So, who are professional philosophers and what types of questions do they ask? Modern philosophy is practiced by all sorts of folks, and investigates issues like: What is real? How do we know? What is good? What is beautiful? What is just? What is the mind? What is language? What is science? And even, perhaps a bit vainly, what is philosophy?

The Philosophy Club has been honored to host Rutgers faculty members Prof. Peter Klein, Prof. Douglas Husak, and Prof. Alvin Goldman to speak on these topics. The Philosophy Club has also hosted graduate students Lisa Mirrachi, David Black, Rodrigo Borges, Marilie Coetsee, and Michael Smith, who came to share their philosophical insight.

Regardless of what classes you’ve taken or what your background is, the answers to these philosophical questions are ones you have views on! Do you jump off of cliffs contemplating the meaninglessness of everything and how you cannot know about gravity? Do you see how that would be a bad thing for you to do? That it would cause you as a person to cease to exist? That it would be unfair on your family? Philosophy Club is a setting where you can learn about yourself and develop your views on these fundamental issues.

The next step: Making Philosophy

Students are not restricted to being on the audience’s side of the conference table, however. An important part of the student philosopher’s philosophical progress is expressing their ideas to others, seeing exactly where it is that others may disagree, and considering whose arguments are stronger.

If you have gone to philosophy club and want to take the next step, there are at least three ways to move forward: (1) write a thesis, (2) participate in the undergraduate conference, and (3) work with an undergraduate journal.

Theses

The first step you should take to “make philosophy” is to write a paper that attempts to contribute to philosophical progress. Although you can look up all the logistics of thesis writing on the Rutgers Philosophy Department website, I’ll share some of the harder aspects of it. Namely, (1) picking a topic and (2) securing an advisor.

As a prerequisite to finding the issue that shakes you to your very core, take a well-balanced set of courses. I’d recommend every philosophy major take at least an epistemology, a metaphysics, and an ethics course. While in those classes, consider which of the debates you enjoy the most: perhaps you like the back-and-forth of the Gettier counterexample literature, the fundamentality of metaphysics, or a particular moral issue.

When you think you have a candidate for something you could write deeply about, jump up to the 400-level course with a tenured faculty member in that field, and go to office hours to talk over your papers for the course. Rutgers faculty are encouraging and exciting to work with, but you will need to reach out first.

Should you do well on a paper, ask the professor if they would consider working with you to develop your writing into an honors thesis. This would also be a great time to start discussing graduate school and letters of recommendation, should you be interested.

Conferences

One place to take your completed thesis is to an undergraduate philosophy conference, where you will present it to a national audience. A Google search will yield calls for papers all across the country, as more universities begin hosting such conferences. Should your work be accepted, you’ll take on the job of the visitors to the Philosophy Club: you’ll start by presenting your research, which will be followed by a question-and-answer session. This is an amazing opportunity to hone the skills you’ll need to be a professional philosopher. Namely, articulating your views to an audience of your peers.

Rutgers and Princeton are among the universities hosting undergraduate conferences, as the first annual jointly-held philosophy conference was organized by Rutgers’ own Jimmy Goodrich and Princeton’s Max Siegal. Students from NYU, McGill University, Brown University, and many more came to Princeton to give their selected paper in the form of a presentation. The keynote presentation was given by Rutgers’ Prof. Stephen Stich and Princeton’s Prof. Michael Smith on the role of intuitions in philosophy. It was a stimulating two-day event that will happen again in the Spring of 2015.

Journals

Another avenue for your thesis is undergraduate journals. The role of a journal is to select and edit philosophical work for publication, to be read by a peer group. Just like conferences, journals issue a “call for papers”, which you’ll receive via email or can find with a Google search. If selected, you’ll likely undergo a couple of rounds of edits and eventually receive a published copy of your work!

At Rutgers, the undergraduate journal is called Arête. I recommend that you submit your paper to other universities’ journals and opt to join Arête as an editor, for two reasons: (1) you cannot join another university’s journal, and (2) it raises editorial concerns to both edit and publish your own work. To join Arête, you’ll need a special permission number from the Editor-in-Chief, which you can get with a couple of emails. The undergraduate journal at Rutgers will give you another set of skills you’ll need to go on in philosophy: to read, interpret, and constructively criticize the work of your peers.

The roadmap, completed

To begin practicing philosophy at Rutgers, attend a few sessions of the Philosophy Club. If the issues at stake excite you, take a few classes and find your favorite topic. After that, the philosophy major at Rutgers is the most rewarding experience I’ve had: thinking deeply with the help of the world’s best philosophers, submitting and participating in conferences of like-minded peers, and in turn considering their work. With hard work, these steps will turn you into an aspiring philosopher.

Philosophy's Not Dead

The Wave Function, Breakfast Cereal, and Philosophy of X

Philosophy is the oldest study in the world, arguably beginning when Plato established the Academy around 387 BCE. Simultaneously, it is arguably now the most disparaged: every few months a leading scientist will claim that philosophy is dead or that metaphysics is fairy-laden. There are at least two ways to respond to this: (1) to defend philosophy on the scientist’s own stomping ground, citing examples of progress within its scope, and (2) to justify philosophy on its own merit, defending the goals of philosophy.

The purpose of this article is to appeal to the scientifically-minded to embrace philosophy, both for its contributions within scientific inquiry and on its own merit.

What is Philosophy?

What is the scope of philosophy? It’s very clear that psychologists study people, biologists study life, physicists study energy, etc. So what on earth does the world’s oldest study actually study? A popular answer is that philosophy is whatever philosophers do, but this just pushes the bump in the carpet. Another popular method of working this out is to look at the word “philosophy” itself and see its meaning, which is “love of wisdom.” Unfortunately, this is too cryptic and still just pushes the bump in the carpet. Perhaps a look into the hard-and-fast divisions of the subfields of analytic philosophy will help. They are:

  1. Logic, “What is truth and how does it work?”;
  2. Metaphysics, “What is real?”;
  3. Epistemology, “What is knowledge?”;
  4. Ethics, “What is good to do?”;
  5. Politics, “What is justice?”;
  6. Aesthetics, “What is beauty?”

But this answer won’t do either, for two reasons. First, I don’t think it is going to impress the scientifically-minded skeptic that philosophy is worthwhile or rigorous. Second, and thankfully, this taxonomy fails to capture where most of the progress in philosophy has been: the “philosophy of Xs.” There is a “philosophy of …” for practically every field, with some of the most prominent being the philosophy of science, the philosophy of language, and the philosophy of mind.

In this article I’ll map out one such “philosophy of X” study and argue to the scientifically-minded skeptic that it is both properly a field of philosophy and as rigorous and as worthwhile as empirical science. Specifically, I think that there are two intellectual activities at play: (1) the “first-order” observation and hypothesizing about physical phenomena, and (2) the “second-order” interpretation and synthesis of these hypotheses into the broader corpus. It is in (2) that I see the scientifically-minded skeptic embracing the practice of philosophy.

Philosophy of Physics: An Open Question

Hypothetically, consider an omniscient, but somewhat limited, god at the very beginning of space and time. This god only knows everything about the present moment, but it is indeed everything, including the position and velocity of all particles and the laws which govern them. What laws are is a question for another article. All we need to think about is what is logically consistent with such an imaginary being.

Is this enough information to determine how the universe will end? If it is not possible to determine the course of the universe like this, are there probabilities? Perhaps there’s a certain probability of a heat death and a certain probability of a big crunch death of the universe. Could this hypothetical being determine these? Are the probabilities somehow “in the world” and observable, or merely an instrumental frequency count we assign to sufficiently complex phenomena? If this omniscient being can neither determine the course of our universe nor work out its probabilities with certainty, what is it that binds together frames in space and time? Is it entirely random? Whether or not these problems interest you, these are the sorts of questions where I think philosophy can help physics: in the interpretation of physical observations and hypotheses.

Questions of determinism and indeterminism are of clear philosophical interest, and hinge on the findings of our fundamental physics. If our universe were a purely Newtonian “clockwork universe,” then with enough investigation we could come to predict the end of our universe and our choice of breakfast cereal tomorrow morning. On the other hand, if our best physics has probability or indeterminacy built in, then the end of our universe and our choice of breakfast cereal may very well be unknowable. This is consequential to our understanding of ourselves and our ability to choose, of what it is to make the good choice, and of what it is that we can in principle come to understand.

We have two questions here: (1) What is? (2) What does that mean? Philosophers are interested, just like physicists, in determining what is. In addition, I hold that philosophy is uniquely the study of how to “glue” what is with our everyday lives. This is where I think philosophy parts ways from pure observation.

Science & Philosophy Side-by-Side

This taxonomy is not at all divisive; it’s just that some eager minds wish to carefully observe the world and hypothesize about it (what “is”), while others want to take those hypotheses and make them cohere with the rest of the corpus of human understanding (what does “it” mean?), and those categories are not mutually exclusive. The distinction is between a straightforward hypothesis about the physical world and a picture, understanding, or conception of the experienced world. The question of what the world is actually like overlaps with the study of metaphysics; specifically, it is exactly the study of ontology, which asks, “What is being?” The question of how and whether we can come to know what the world is like is an epistemological question. Whether or not there is a pre-determined end to our actions should inform our intuitions about ethics: can you blame someone for an action they were pre-determined to perform? From there, the question trickles into how to organize our society in the fairest way. Furthermore, I take this to be how the hardcore physicist’s own domain of inquiry will contribute to the classic questions of philosophy.

The determinism/indeterminism debate above is directly observed in quantum phenomena, where our best hypotheses use a wave function to describe the state of particles. However, I claim that nothing in the fundamental physics of our world could in principle answer questions like (2). That’s one of the domains of philosophy: to reconcile the latest and greatest discoveries with other, more ordinary observations and our strongest intuitions. Nothing about the way the world is is going to tell us how we should act, for instance. No fact that’s come to be known will tell us whether we can know that fact. Some of our best theories of physics hold that we should abandon our everyday notions of space and time, of simultaneity, of color, of mind. My use of philosophy is to reconcile this objective study of the world with what it’s like to be human.

Presentation on "Humean Supervenience Debugged"

This is a video of a presentation of David Lewis’ 1994 “Humean Supervenience Debugged”, where he deals with the “big bad bug” of chance while holding a Humean Supervenience thesis along with a best-systems account of laws. I made this to practice giving the presentation, which was for Professor Barry Loewer’s metaphysics class.

The First Princeton-Rutgers Undergraduate Philosophy Conference

Last week, undergraduate philosophers from across North America made the pilgrimage to Princeton for the first annual undergraduate philosophy conference held as a partnership between Princeton University and Rutgers University.

The centerpiece of the event was a keynote conversation between Stephen Stich of Rutgers University and Michael Smith of Princeton University. They presented their takes on the inadmissibility and the indispensability, respectively, of intuitions in philosophical reasoning.

The conference was organized by two senior undergraduates in philosophy, Max Siegel of Princeton and Jimmy Goodrich of Rutgers. For my small part, I created the website, which you can find here.

What happens at philosophy conferences?

Philosophy is a field which is practiced in many different ways and places: Aristotle’s famous Lyceum was a grove and gymnasium, Sartre and Camus preferred to hang out at coffee shops like Cafe de Flore, and Nietzsche liked thinking on long walks through wilderness. What are professional philosophers up to now? How do you even practice philosophy?

Philosophy doesn’t have the kind of evidence and mathematics people are familiar with from the physical sciences. Instead, we look at the nature of concepts and logic, and we manipulate our intuitions in order to study these things. Roughly, what this translates into at philosophy conferences is presentations reflecting a philosopher’s latest research: a formal argument for a view or for its rejection.

For example, Liz Jackson of Kansas State University, one of the undergraduate presenters at the conference, argued that a given view about the connection between blameworthiness and belief was inadequate, and she offered her own fix for the inadequacy. She crafted a counterexample to the existing view about the link between belief and blameworthiness, supported both by our intuitions and by her motivating reasons to reject the view. Her solution, she argued, was more consistent with our strongest and most common intuitions.

Another activity that happens at philosophy conferences is perhaps, if I may, a bit more exciting than presentations: debate! It happens quite a bit in the public and private spheres, on TV and Facebook and in court. Philosophers are no different. At PRUPC the keynote was a debate between Stephen Stich, presenting his case against the use of intuitions in philosophy, and Michael Smith, presenting his case for the use of intuitions in philosophy.

What did the undergraduates have to offer?

The presentations were varied and compelling, ranging from the topics of ethics to philosophy of math to assertion. Beginning with some epistemology, Gabriel Lariviere came all the way from McGill University to offer some insight about the knowledge norm of assertion. In the same section, Liz Jackson of Kansas State University presented her work on the connection between believing and being blameworthy. Both received comments and criticism from Rodrigo Borges of Rutgers University.

Hailing from Orange County, California, Ryan Schering of Chapman University presented his work, “A Rejection of the Metacoherence Requirement.” Zech Blaesi of New York University challenged Richard Joyce’s argument for moral error theory in his “Myths of Morality” presentation. Both received comments from Georgi Gardiner of Rutgers University.

Ethan Perets of Columbia University presented “Prospective Memory and Determination of the Subject Referent.” Isaac Neely of the University of Texas at Austin presented “Hume’s Labyrinth: Hume and the Self.” Both received comments from Simon Cullen of Princeton University.

Philip Bold of Brown University responded to a variant of the Benacerraf-Field problem for Mathematical Platonism in his “Would Reliability in Arithmetic Be Striking?”. Helen Zhao of Johns Hopkins University presented her work on Aristotle in her “On Our Knowledge of Primary Substances.” Both received comments from Yoaav Isaacs of Princeton University.

What’s the debate about philosophical intuitions?

Philosophers will often use wild hypotheticals to appeal to our intuitions about a topic: for instance, the trolley cases in ethics or the Gettier cases in epistemology. Intuitionism is, roughly, the view that our intuitions about a domain are useful or true or justified or similar, with most forms of Intuitionism being a combination of these. For instance, moral intuitionism is a view about the epistemology of morality, the study of how we come to know about ethics. It holds that when we intuit that some action is wrong, or perhaps that some end is valuable, we are at least prima facie justified in believing or asserting it. Intuitionist views don’t claim that every intuition is correct, but more reasonably that if we carefully consider, compare, and systematize our intuitions, then we can get at the truth.

Professor Stephen Stich presented his argument against the use of intuitions in philosophy on the grounds that intuitions about the same cases differ based on many effects that should have nothing to do with the truth. For instance, some of his work, among that of many others, shows that if we change the order of presentation of various trolley cases, the respondents will change their intuitions. Furthermore, respondents from different cultures and ethnicities had different intuitions about different cases in ethics, philosophy of mind, epistemology, and other fields. For example, respondents from East Asian cultures were more likely than others polled to ascribe knowledge in Gettier cases. Of course, the order in which we hear about the cases or the culture we hail from should not matter if our intuitions are justified and reflect something about each case individually.

On the other side of the debate, Michael Smith defended the use of intuitions in philosophy on grounds that it can do so much for philosophical reasoning. His way of showing this began with Descartes’ cogito, covered here a few months ago. His claim was that in intuitively investigating the statement “I think therefore I am”, all major problems in philosophy could be derived.

  1. If you are thinking, you can posit that other people are too, and you get the problem of other minds in philosophy of mind.
  2. If you are, you can ask what it is to be and what’s the nature of being, which is the role of metaphysics.
  3. You can think about “What should I think?”, which is epistemology.
  4. “I am, what should I do?” is ethics, and so on.

After the two professors presented their case and responded to some questions, they both sat at the front together to discuss the topics one-to-one. For developing and eager undergraduates like myself, Jimmy, Max, and all who attended, to see these two brilliant philosophers casually discuss so exciting a topic was thrilling.

A review of the debate

Professor Stich’s research into the variability of intuitions is very damning for the person who wants to defend their use. It’s really important to me and many others that philosophy is effective at getting at the truth, in whatever way it might do so. If intuitions vary with culture or society, and we’re systematically using these intuitions to guide philosophical reasoning, then philosophy varies with culture and society. But! Truth doesn’t vary with culture or society, so how can philosophy be getting at any truth?

Professor Smith’s investigation, on the other hand, into what’s possible with intuitions appealed to the beauty of armchair philosophy that continues to motivate my studies. The way he showed how many different questions and answers could be accessed a priori from Descartes’ unshakable pillar was ingenious and, to me, convincing.

Despite this, the armchair philosopher should be worried about the skepticism that results from the inadmissibility of intuitions in philosophy. But I think moving from the variability of intuitions to their utter inadmissibility is too hasty. When something is intuitive in contexts other than philosophy, it’s taken to be a virtue, but only at first glance. It is intuitive that every even number is divisible by two, and this is a good thing for someone who makes such a claim. However, mathematicians are going to need a proof. It is unintuitive that the fundamental nature of our world has anything to do with n-dimensional strings (or the like), but sufficient evidence will move me to adopt such a belief.

I propose philosophers treat intuitions in this sense. If you are working on your theory of knowledge or mind or value and your argument uses intuition, or your theory is intuitive, you should take this as good prima facie reason to adopt your view. This will be similarly good evidence for people who share the intuition, but be prepared, almost as a sociological fact about philosophers: someone will not share the intuition. And for them, we need to have prepared an argument devoid of that intuition.

What do you think?

What do you think is the status of your own intuitions? Surely you find it quite intuitive that “murder is wrong” or “2 + 2 = 4,” or things even more basic, such as the Law of Non-Contradiction? And things similarly quite counter-intuitive, like that our world rests on the back of infinitely many massive turtles.

So the question is, do you think these intuitions are sufficient evidence to conclude anything about the nature of morality, math, or the Universe? If you do, then how can you account for people whose intuitions are different? If you don’t, how do you explain or justify our daily reliance on them? And regardless, how do you account for that persuasive and powerful feeling of truth toward intuitive claims?

How Do Philosophers Use Logic?

One of philosophers’ favorite activities is distinguishing between things. Where we have some concept like “moral action” or “beautiful objects”, we investigate its nature by distinguishing kinds of moral action or types of beautiful objects.

In everyday conversation, one of the best and most often practiced ways of making progress in a field or an activity is to reason: that is, to assert something and then make other statements which in some way support it. In the context of humanistic or religious debate, when a person commits to a belief about humanity’s purpose or the meaning of existence, giving reasons which structurally support the claim is an effective way of persuading another or exploring one’s own views.

The Set Up

These structures, in a philosophical context, are called arguments, and not only do they have a logical structure but they also have components, which are expressions of what philosophers call propositions. Consider the following, and perhaps familiar, argument:

  1. If the building-blocks of life are irreducibly complex, then there is a Creator.
  2. The building-blocks of life are irreducibly complex.
  3. Therefore, there is a Creator.

Philosophers have given formal names and definitions to the structure and components of this argument, which they also take to be distinguishable. To tease out one from the other, it’s helpful to replace the string “the building-blocks of life are irreducibly complex” with some symbol, for instance p, and “there is a Creator” with another, shorter symbol, say m. This yields the following:

  1. If p, then m.
  2. p.
  3. Therefore, m.
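For the programming-inclined, the schema can be sketched in code. This is my own illustration, not part of the formal apparatus: treat the argument form as a fixed template and the propositions as values you substitute in.

```python
# A sketch of the modus ponens schema: the structure is fixed, while
# the propositions p and m can be any sentences whatsoever.

def modus_ponens(p: str, m: str) -> list[str]:
    """Return the three lines of the argument with p and m filled in."""
    return [
        f"1. If {p}, then {m}.",
        f"2. {p}.",
        f"3. Therefore, {m}.",
    ]

# The Creator argument and the Presidential argument share one structure:
for line in modus_ponens(
    "the year is 2014",
    "the President of the United States is Barack Obama",
):
    print(line)
```

Swapping in the Creator argument’s sentences produces the other argument from the very same template, which is exactly the point: one structure, many fillings.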

Now this gets very interesting: if you fill in the p and m symbols with different sentences – any sentences! – you’ll find that even though the argument is completely logical and valid, you may or may not agree with the reasoning. But how could this be if all these forms of reasoning follow the same structure? For instance, consider the following replacement of p and m:

  1. If the year is 2014, then the President of the United States is Barack Obama.
  2. The year is 2014.
  3. Therefore, the President of the United States is Barack Obama.

Imagine a person, perhaps it’s even yourself, who wants to grant or accept the truth of the reasoning in the Presidential argument but not in the Creator argument. What tools does the person have for rejecting one but not the other if they follow the same structure? They both seem to contain the same method of reasoning, but one must be making some other mistake!

Soundness and Validity

This is where philosophers distinguish valid from sound, terms you’ve probably encountered already in day-to-day conversation.

Soundness is relatively easy to understand. An argument is said to be sound only when it is valid and its premises are actually true. In other words, the following argument is not sound:

  1. If Bertrand Russell is alive, then the Internet is pink.
  2. Bertrand Russell is alive.
  3. Therefore, the Internet is pink.

What? That’s exactly right: this argument is not sound. Beginning with (1), a reasonable person will deny that Bertrand Russell’s living presence has anything to do with the color of the Internet, which is an absurd idea in itself. As for (2), a glance at his Wikipedia page will show that Bertrand Russell has been deceased for some time. Because (1) and (2) are not true, using them to get (3) is no good. Now, some day we might find, against all perceptions to the contrary, that the Internet has color and is pink, but it’s not pink because of (1) and (2).

Validity is somewhat trickier to understand. Validity applies to what I’ve been calling the “structure” of an argument. It doesn’t have to do with the truth of the premises, but with the structure or form of the argument. An argument is said to be valid only if, were the premises true, the conclusion would have to be true as well. Notice of our absurd Colored Internet reasoning that if Bertrand Russell’s being alive really did affect the color of the Internet, and if Bertrand Russell really were alive, then we’d be forced to accept (3), that the “Internet is pink.” If premises (1) and (2) are true, then (3), i.e. the conclusion, must also be true.

To really reinforce this notion, go back to the symbols p and m. Construct an “if-then” statement you believe is true with p and m, and notice that when the “if-then” is true and the “if” clause is true, you feel compelled to accept m.

Now back to our imaginary person who wants to reject the Creator argument above and accept the Presidential argument. With this distinction, they can now say of both arguments that they are valid, but that only the Presidential argument is sound. We can see two things in the Creator argument. Of (1), they could say that irreducible complexity can arise in ways other than by a Creator or Designer. Or of (2), they might say that nothing is irreducibly complex. Either of these empirical or theoretical observations would show the argument isn’t sound. And because the Creator argument isn’t sound, our reasoner does not accept it, even in the face of its validity.

An Open Question

A very interesting question about this distinction is: where does validity “come from”? It is very obvious to me and everyone I’ve encountered that modus ponens (the name given to the structure of the arguments in this piece) is a “good” way of reasoning, but where does this goodness come from? Is the source of its goodness some deep truth about the nature of the universe which we all have access to? Or is the source its repeated success in our repeated use? In other words, is it just that it works?

Another way philosophers pose this question is: is valid reasoning normative? Do we form norms about ways of reasoning and impose them on ourselves and others? Or have we accessed the truth about reasoning? One fact relevant to this question is that you can construct a “truth table” for this form of reasoning and show that in every case where the premises are true, you get a true conclusion. It is objectively demonstrable, so you can investigate it yourself.
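That truth table can be checked mechanically. Here is a small sketch of my own that enumerates every assignment of true and false to p and m and confirms there is no row where both premises hold and the conclusion fails, which is exactly what validity requires:

```python
from itertools import product

# Truth table for modus ponens: the premises are "if p then m" and "p",
# and the conclusion is "m". Validity means no row has all premises
# true while the conclusion is false.
for p, m in product([True, False], repeat=2):
    if_p_then_m = (not p) or m          # the material conditional
    premises_true = if_p_then_m and p
    print(f"p={p!s:<5} m={m!s:<5} premises true? {premises_true!s:<5} conclusion: {m}")
    if premises_true:
        assert m  # never fires: the form is valid
```

The only row with both premises true is the one where p and m are both true, so the conclusion is true there too; the loop finishes without the assertion ever firing.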

But objectivity and “deep truths” aren’t the same. For instance, it’s objectively demonstrable that Rutgers University educates many people, but this isn’t a “deep truth about the Universe”.

What do you think? Is valid reasoning fundamentally true in some way? Or do we construct for ourselves norms which are “just reliable” or something akin? Is this question even sensible to ask?

Rutgers Radio is Emancipating Electronic Music

Walk a few blocks behind the Rutgers Student Center on the right nights and you’ll notice there’s something in the air. Of course, there’s the smell of a nearby pizzeria, but that’s not quite what it is. Perhaps there’s even the faint smog we’ve become accustomed to, but that’s also not what I’m talking about. Notice instead that low, muffled, repeating thud coming from … everywhere. The youth of every generation since the rise of the middle class have had their dance music: their swing, their soul, their twist and shout, their disco. From disco’s use of the synthesizer came the seed of today’s dance music, and the rise of the personal computer and digital audio workstations provided the necessary fuel. We have electronic dance music.

Alternatively, tune in to 90.3 The Core FM on Sundays beginning at 4:00PM and you’ll hear a much clearer and more coherent sound, beginning with SQUO, followed by DJ Soma with Straight to the Hard Drive, then Lauren Jefferson with Eclecticism, and ending with DJ Psy with Electronic Phonix. This style of electronic music is not of the variety of the Netherlands’ top-40 house hits, but a sound that’s much more home-grown and personal.

With his show, SQUO retains that hard-hitting urge to dance from the music of house parties and nightclubs while adding an underground, undiscovered, up-and-comingness to his mix; the sound is unapologetically electronic. His style is not the polite bass electronic music has become known for, but rather the grittier, more soulful bass of artists like Branchez, Trippy Turtles, and Victor Niglio. When not DJing for The Core, SQUO is brewing his own entries to the electronic music scene. “I’d love to be able to reach out to more budding artists”, says SQUO, who, like the rest of the station, is intensely committed to the community over the commercial, “I’d like to develop a mixshow of my own, bringing on unknown DJs, featuring their mixes, and having a discussion about their music.” You can learn more about SQUO on his website, DJSQUO.com.

This emerging, soulful variety of electronic artists is sponsored and broadcast by The Core FM, a purely student-run organization. Coming to your radio dial at 90.3 FM and from their website thecore.fm, The Core FM is available to the New Brunswick area and beyond. “I am about inclusion, not exclusion”, says General Manager Josh Kelly. Both its New Brunswick community focus and its World Wide Web presence are consistent with the General Manager’s commitment to a broad and diverse audience and Rutgers’ slogan, “Local Roots, Global Reach.” Josh continues, “We try to keep commercialization out of the equation, we want to find and showcase music that is for kids by kids, not by a big budget producer.”

The show that immediately follows SQUO also exemplifies this attitude: DJ Soma’s Straight to the Hard Drive. DJ Soma resists the commercialization of his music both in the sense that it isn’t from a massive label and in that the music is free and legal, with links on DJ Soma’s blog, dj-soma.tumblr.com. “The beauty of electronic music too is the fact that almost anybody with access to the technology can create something unique, and distribute it to a possible audience of millions,” says DJ Soma. His timbre complements SQUO’s well, with ambient soundscapes, gritty drums, and psychedelic synths. The music’s soul emerges from guitar riffs, turntablism, and occasional samples from music, movies, and radio circa the ’40s and ’50s.

“I am really into keeping my ear to the streets”, says The Core General Manager Josh Kelly, who is keeping his focus on the community aspect of The Core. Not limiting themselves to a single medium to share music, The Core FM recently worked with New Brunswick to organize a free and public show in Boyd Park that showcased local talent. This is The Core’s categorical attitude, as Josh says, “If it takes lots of money to get somewhere, then it isn’t part of a fair system, and people are systematically left out in the cold in terms of participating.”

Lauren Jefferson continues electronic music on Sundays with her show Eclecticism at 8:00. As the name suggests, her style is much more varied, retaining the electronic timbre and dance inspiration while producing excitement from novelty. Lauren is deeply committed to bringing her listeners a high-quality and novel listening experience: “It hurts when I hear people say they can never find good new music,” says Lauren. On some occasions, Eclecticism’s timbre can be downtempo and soulful, with tracks from labels like Ghostly International. Alternatively, the show can take on an entirely more catchy and upbeat vibe with offerings from French label Kitsuné. You can keep up with Lauren on her blog, eclecticism1.blogspot.com.

Concluding electronic music on Sundays is DJ Psy with his show, Electronic Phonix. DJ Psy has been broadcasting since 2008, and his sound has grown with the station, with the genre, and with the technology. Very excited about the state of the genre, “The cost of production equipment is now as cheap as a laptop and some software”, notes Psy, “We’re seeing the biggest growth in a ‘genre’ since amplifiers, guitars, and 4-tracks became commodities, forever changing rock’n’roll.” Electronic Phonix is glittery, shiny, and bassy, with beats that move your feet, featuring artists like The Magician, RAC, Joe Goddard, LCD Soundsystem, Coleco, and Classixx.

The artists at The Core FM are challenging the music industry from two sides, offering their community high-quality, non-commercial radio on one front and both sourcing and broadcasting their music over the Internet on the other. As more and more people come to control the means of electronic music production (computers and audio software), there will be less and less possible or necessary involvement from the record industry, whose very name now sounds aptly antiquated. The Internet has emancipated an entire generation of people wanting to express themselves, and The Core FM is a manifestation of that. From Josh Kelly’s point of view, “We want to find and showcase music that is by kids for kids, and not by a big budget producer.”

4 Ways to Answer Children That Keep Asking, "Why?"

“Why is the sky blue?” asks a curious child on the drive to school in the morning. The child’s parent, being a worldly person, happens to know the answer. “Blue is scattered more than other colors because it travels as shorter, smaller waves.” responds the parent. “Well why is blue scattered more?”

It’s a classic parenting scenario, alongside “Are we there yet?”: a parent answers a question like “Why is grass green?” and an incessantly curious child craves more with the word, “Why?” In moments of great patience, the child may get three, four, perhaps even five good answers, but when the reasons why approach facts about sub-atomic particles, it becomes increasingly difficult even to answer. The practice of asserting facts and asking for reasons to believe those facts is an affair that epistemologists are interested in. If it were possible to answer all of a child’s questions, what would the series of answers look like? What possibilities are there?

This question is known as the “regress problem” in epistemology. If this conversation need not end and the parent were all-knowing, would it go on forever? The problem this presents, and the reason it’s so hard to answer children’s unceasing questions, is that the chain of reasons seems to have no end. If that’s the case, how can we ever raise the credibility or warrant of a claim? If reasons never reach some end, some inherent truth, how can justification ever reach our beliefs?

What possible solutions are there to this and, by corollary, how can we satisfyingly answer curious kids? Well, the logical space seems to have only a few options:

  1. The reasons end: there is a foundational reason;
  2. The reasons loop back on themselves: reasons need only be coherent;
  3. The reasons go on infinitely: there is never a “last reason”;
  4. We are just forming beliefs arbitrarily.

“If we keep going, we’re going to get something foundational.”

Perhaps we build all of our justified beliefs on a bedrock of unquestionable foundations. This is plausible because there could be a set of reasons which it just does not make sense to question.

For instance, imagine again a conversation between a parent and a child, this time, say, a father and his daughter. The father notices that there is a blue smear on the living room wall, and on the basis of this forms the belief that his daughter was painting today. “You were painting today? Can I see your painting?”, he asks. The daughter, having not told her father she painted, wants to know his reason for thinking she painted. He responds, “I see the blue smear on the wall over there.” The daughter, in the mood to investigate the world, asks her father, “What is the reason you believe that you see a blue smear on the wall over there?”

The intuition of foundationalism, the theory which posits an end to the regress, is that questions like this, and questions about other foundational beliefs, are not valid questions. The father may be justified in responding, “What do you mean, what is my reason for believing that I see a blue smear? I have no reason; I am just appeared to as if there is a blue smear.”

There are problems for this view, however. For instance, what foundation is there for mathematical knowledge? And isn’t the father’s reason for believing really something like, “When I am appeared to as if something, then that something is so”? What are the conditions for a foundational belief?

“If we keep going, my coherent reasons may repeat themselves.”

Imagine if in the process of describing to someone why the sky is blue, you at some point gave two separate reasons that both cannot be true. It would be perfectly natural for someone to question how you could hold both of them simultaneously, and you would likely try to resolve the conflict, to make your reasons cohere.

This is the intuition behind the coherentist response to the regress problem, where the structure of justification is such that you will eventually loop back around on reasons. In the genealogy of your justification for any proposition, if the cycle is sufficiently large, coherentists hold, then you have knowledge.

The problem this view faces is that it is a longer form of circular reasoning, where it seems to be acceptable to assert a proposition and then, when asked for a reason, supply that same proposition. Furthermore, there are plenty of coherent systems which are not true.

“This is just going to go on infinitely.”

The feeling that I get when I have a conversation with a child like this is that it just never stops. There is always another reason for believing something, it seems. For instance, suppose you say, “It’s twelve o’clock,” and you’re asked, “Why is it twelve o’clock?” Well, one reason that it is twelve o’clock is that it is not 11:59; another is that it is not 11:58; …

There’s certainly an end to my knowledge, and there’s probably an end to human capacity, but that doesn’t mean there’s an end to potential reasons for believing any given thing. This is the claim and intuition of infinitism. The problem for this view is that if there’s always another reason to believe something, how can you “hook up” a proposition to the truth? Foundationalism has a bedrock, but infinitism needs an account of raising the credibility of a proposition without foundations to make it usable.

“Eventually, I’ll have no reason for believing what I do.”

The troubling aspect of the regress problem is that none of the answers are straightforwardly right, none of them are obvious. Yet if none of them are the right view, if the question of the structure of justification is a valid one, then we necessarily cannot be justified in any of our beliefs.

And this would be especially unsatisfying for inquisitive minds.

Understanding "I think, therefore I am"

“I think, therefore I am.” In 1637, Descartes penned what has become the most oft-quoted catch phrase from epistemology, if not all of philosophy. Compare the phrase to other philosophical catch phrases, like the Golden Rule in ethics or “We hold all men to be created equal” in politics. In my experience, and it was certainly true of me, there is not the same understanding of what Descartes meant when he wrote the phrase as there is for what the teachers of the Golden Rule meant or what the Founders meant with “We hold all men to be created equal.”

This piece will give the reader a modern epistemological context for understanding what Descartes penned in 1637.

Context

What does it mean when you say you know something? What type of thing can even be known? These are basic questions, yet they will prove to be important. Bring up in your mind a piece of knowledge that you have, something very basic; let’s say, “I am reading a blog post on Applied Sentience.” You see the computer screen in front of you with the webpage open and the blog post front-and-center, and on the basis of this you come to know that you are reading a blog post. This is a perfectly natural use of the word. So at least a portion of the type of thing that comes to be known can be expressed as a proposition, and a proposition can be either true or false.

What can be said of this proposition that you know? If you stopped reading Applied Sentience and yet, for some odd reason, held on to your belief about reading a blog post, would it be appropriate to call it knowledge? The intuition is a resounding no. It seems, then, that for you to have knowledge with respect to any proposition, the proposition must be true. We retract our claims to knowledge quite a lot with statements like, “I just knew that my team was going to win, but alas, they did not.” It is also absurd to claim to know propositions that cannot be true, like, “It is raining and it is not raining.”

What relationship must we hold with regard to this true proposition? There are an uncountable number of true propositions, but there is something different between any given true proposition and the knowledge we hold. It is exactly that we do “hold” the proposition: you believe that you are reading a post on Applied Sentience. If you did not believe it, then it would be odd and inappropriate for you to claim you had knowledge. In fact, claims of this variety are members of Moore’s Paradox.

And so we have a true belief, but is knowledge anything more? There are sorts of true beliefs you could hold and still fall short of knowledge. For instance, you could accidentally predict the coming lottery by use of chicken bones. It is unlikely, but it is possible, so it is worth our consideration. Why do chicken bones which yield true beliefs not confer knowledge? Because they are inappropriate justification. Notice that in our running example, you believe the truth on the grounds that you see a computer screen with a blog post open.

With a few challenges to our intuition, we have come to the conclusion that most modern and ancient epistemologists think captures what is at least necessary for knowledge: justified true belief.

Significance

Descartes knew of this formulation of knowledge, but he thought it was unsatisfiable with regard to the typical cases we would call knowledge. Descartes would say that you do not have knowledge even of our simple proposition, that you are reading a blog post. What Descartes noticed, and what made him think this, is that you cannot discount the possibility that you are being deceived. The way that Descartes presented this concern is by saying that it is always at least possible that an Evil Demon is causing you to experience a blog post in front of you, when in fact it is just an illusion. Notice that this means you can have a belief with an appropriate level of justification, but Descartes worried that despite this, our experiences cause beliefs that are not true.

This is what is called a skeptical argument, as it leads to skepticism where we intuit that we have knowledge. Descartes would say that there is no level of justification which entitles you to think your beliefs based on sensory data are true, as the senses cannot be infallible. Descartes still wanted to be able to form justified true beliefs about the world, however, and he especially wanted to have knowledge of mathematics, science, and God. His struggle was to find a system which was both infallible against the Evil Demon and possible for human agents. He examined all of his beliefs, searching for any that survived the Evil Demon. He noticed that there was one belief that the Evil Demon could not possibly deceive him about!

“I think, therefore I am.” Applying our intuited standards for knowledge, we see that it would be very hard to actually think while not believing that you think. The justification one has for one’s thinking is accessible to that person alone, which is what makes it immune to the Evil Demon. The truth of one’s existence is not something we can access directly, but it would be very hard for us to reject our own existence. This argument applies only to the first-person perspective, as Descartes would say there is no way for you to know the thoughts of others without the Evil Demon interfering.

It is on this foundation that Descartes restored his belief in science, mathematics, and God. Descartes wanted the same obviousness and necessity in his other beliefs as he had with his “I think, therefore I am.”

Is it true?

Does knowledge require infallibility? Do you need to be able to discount all of the possibilities to know that the sun will rise, that you have a birthday, or that the White House exists? Descartes may have thought so, but with this initial exposition into the analysis of knowledge, you are more armed than ever to find your own answers to these questions.

What Infinitism is Contingent On

Infinitism about the depth of the graph representing the structure of justification, whether foundational or coherent, whether propositional or doxastic, will be false unless the subject of the content of the belief has the feature of being both infinitely small and infinitely large in some way. This rule gets it right for the oft-used example of believing that “it is 12:00” (or any time): the infinitist points out that there are, propositionally, infinitely many justifiers. It also gets it right for every case where the infinitist has no response, for example, believing that “I am in pain.” The infinitist cannot respond to that example because it is not necessarily true that the reality of the agent is both infinitely small and infinitely large; it could be the case, but the sciences are still agnostic about our reality. In the case of doxastic justification, because the machinery of the human mind is finite in function, it is implausible that we hold infinitely many beliefs about anything. In the case of propositional justification, infinitism is only possible when the subject of the belief is at least infinitely large or infinitely divisible.

Notice there could be infinitely many propositional justifiers for a belief in depth or in breadth, where depth is the length of a chain in the graph and breadth is the number of edges pointing at a proposition. If reality is temporally infinite but not spatially infinite, the depth of propositional justifiers would be infinite. If reality is spatially infinite but not temporally infinite, the breadth of propositional justifiers could be infinite but the depth would not be (as time “began”). In the case where reality is both temporally and spatially infinite, propositional justification would be infinite in both breadth and depth. In the case where reality is neither temporally nor spatially infinite, or is locally limited or isolated in some way, infinitism with regard to propositional justification will be false.

Understanding Epistemology

The word “epistemology”, I believe, looks pretty scary to those unfamiliar with it. Contrast this with “ethics”, another traditional branch of philosophy, where the word is not scary and many, most, perhaps all people have experience with a way of “doing the right thing.” Understanding ethical issues and dilemmas, in my experience, is more common than understanding epistemological issues and dilemmas. For example, it is more likely that a non-philosophically trained person would have an opinion on the ethics of Syrian intervention than an opinion on what newscasters and politicians are justified in believing. The more I learn about the definitions and distinctions of epistemology, the more I think that raising the public understanding of epistemology to the same level as ethics would benefit the public discourse. I will discuss how I came to the topic as well as explain what I have learned in an approachable way.

My Background

Since this summer semester, the bulk of what I study at Rutgers has been epistemology, beginning with the course “Theory of Knowledge.” If you look up epistemology in the dictionary, you’ll likely find the definition contains the phrase “theory of knowledge”, and I learnt that the course is named this way because the faculty thought “epistemology” would scare off students with little or no background. By a combination of accident and preference, I have taken the undergraduate introductory epistemology course and I am taking the undergraduate 400-level course and the graduate introduction. I like this a lot; I believe this combination will adequately prepare me to interface with problems in epistemology. I’m very excited that my experience has already allowed me to begin reading publications which would have previously baffled me; I am working through Laurence BonJour and Ernest Sosa’s Epistemic Justification.

The Regress Problem

Here is an example of an epistemological problem that I think everyone would benefit from understanding.

Think about your beliefs. Pick any one of your beliefs. You very likely have related beliefs, beliefs that come about from this first one and beliefs that justify this first one. For example, if you picked “Obama is the President of the United States”, you might notice that this belief justifies your other belief that “Barack Obama is the first African-American president” and stems from other beliefs like “The United States has a president” and “The United States is a democracy.” Your beliefs have relationships!

In one direction, you have increasingly higher-order beliefs. In the other direction, you have increasingly basic beliefs. Notice that more basic beliefs are related to higher beliefs in a way that justifies them. It would be hard for you to believe that “The United States has no president” and from this, and other beliefs, conclude that “Barack Obama is the first African-American president.” If you keep tracing how your beliefs are related in the basic direction, what is it that you find? Does it end? Does it loop back on itself? Is it infinite?

This question is known as “the regress problem.”

Possible Solutions

I’ll present it more formally. Imagine a graph where every node represents a doxastic or propositional belief held by a single person. The directed edges between the nodes represent a justification relationship, where $N_1$ has an edge pointing at $N_2$ when $N_1$ is justified in some way by $N_2$. When moving from $N_1$ to $N_2$, I say the belief becomes “more basic.” There are three logical possibilities for the structure of such a graph. As you move to increasingly more basic beliefs:

  1. You can reach a set of “most basic beliefs”; philosophers call this foundationalism;
  2. Justification is cyclic, meaning chains of basic beliefs can loop back on themselves; philosophers call this coherentism;
  3. You never reach a set of “most basic beliefs” and justification runs infinitely; philosophers call this infinitism.
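To make the possibilities concrete, here is a small Python sketch (the beliefs and the graph shapes are invented for illustration) that traces the “more basic” direction from a belief and reports whether justification terminates or loops. A genuinely infinite chain, of course, cannot be represented in a finite dictionary at all, which hints at why infinitism is the hardest option to model:

```python
def classify(belief, justifiers):
    """Trace the 'more basic' direction from `belief`.

    `justifiers` maps each belief to the more-basic beliefs justifying it.
    Returns 'coherentism' if any chain of reasons loops back on itself,
    otherwise 'foundationalism' (every chain ends at a belief with no
    further justifiers). Infinitism cannot fit in a finite dict at all.
    """
    stack = [(belief, {belief})]
    while stack:
        current, path = stack.pop()
        for basic in justifiers.get(current, []):
            if basic in path:
                return "coherentism"
            stack.append((basic, path | {basic}))
    return "foundationalism"

# Invented example graphs.
foundational = {
    "Obama is the first African-American president": ["Obama is the President"],
    "Obama is the President": ["The US has a president"],
    "The US has a president": [],  # no further justifiers: a most basic belief
}
circular = {"A": ["B"], "B": ["C"], "C": ["A"]}  # reasons loop back around

print(classify("Obama is the first African-American president", foundational))  # → foundationalism
print(classify("A", circular))                                                  # → coherentism
```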

Why This Matters

How does this fit into how people talk about belief and justification? Take the issue of Syria. On the topic, it is very common to hear a liberal say of FOX News things like,

FOX News opposes the Syria peace plan because it makes America under Obama seem weak. They’re biased! This fits with their conservative viewpoint.

What that sounds like to me is that our liberal is accusing some “epistemic community” (a piece of jargon meaning, roughly, an institution) of having formed a belief which confirms a bias, which fits an agenda. I think this is equivalent to saying that FOX News has formed a belief with bad justification. Perhaps the belief set is locally coherent and contains some truths, but the accusation is that the justification for the opinion doesn’t contain all the facts. This is one example of the value of being able to understand the epistemological issues of a story as well as its ethical issues: being able to intelligibly answer both “Was it moral to make peace with Syria?” and “Am I justified in forming this belief about peace with Syria?” Just as it’s valuable to use ethical words like “rights” and “happiness” (for utilitarians) in public discourse, it’s valuable to use epistemological words like “coherence” and “foundational belief.”

Consequences and Motivation

Being satisfied with good outcomes is a bad way of keeping oneself motivated. Satisfaction is a reward; it feels good; that is the very meaning of the word. Good outcomes feel good, otherwise I could not use the word good to describe them. So why should you not feel good about a good outcome?

Let’s look at concrete examples and see if we can derive a general truth.

Fitness and health

This is an endeavor in which there are a lot of people who are very dissatisfied. I infer this from the sheer bulk of advertisements for gyms, diets, equipment, and supplements.

A good outcome in fitness and health is something like getting the body you want, lifting the amount of weight you want, or having the clear skin you want. These vary in difficulty, but the fitness and health goals especially valued by our culture are the hard ones; part of why we value them is that they are hard and not many people can achieve them.

Think about those advertisements and their content, the message they broadcast. These products are easy to use, work quickly, and “you will see results.” A gym equipment infomercial, in my experience, does not say that its product requires commitment, hard work, and will lead to countless failures.

And yet that is exactly what it is.

If you want to achieve a difficult goal in fitness and health, I find it likely that you should separate your satisfaction from outcomes. You should not be satisfied when you look in the mirror and see the traits you want to have; you should attach satisfaction to the types of activities which give rise to having those traits.

If you find the process of becoming healthier and more fit rewarding, the intuitive psychological conclusion I draw is that you are then more likely to engage in the process more often, and thus, get the conclusion you want.

That’s right. In order to get something you want, you must first stop yourself from wanting it for the sake of getting it.

Productivity and programming

One of the many reasons I came to like computer programming is the reward circuit. The first program you write is mysterious and you do not really understand what is at play, but your teacher guides you, you hit run, and sure enough, on the screen you see the words

Hello World!

And you think, I did that. And you find other things to do, more complicated, and by extension, more rewarding. But it takes more time to do, and more mental power, and you make more mistakes. There are an increasing number of times that you do not get that satisfaction like that first program, when you hit run and you get catastrophic failure and you do not know WHY.

So you hack away, running through every instruction of your large, complicated program, and you see some errors and you fix them. You have fewer and fewer errors being printed out, and you are hunting down bugs when all of a sudden you hit run and

Hello World!

It works. Your brain’s reward centers light up. You feel very satisfied.

I think this is a young man’s style of programming. It is one which encourages hacking together a set of instructions to see if it works, and then hitting run to get your fix.

The more desirable alternative is that you come to be satisfied with the process of programming. The really interesting problems get solved, and understanding of computer science happens, by attaching your satisfaction to planning your program’s structure in advance, to reading about new concepts, and to coding every day.

The problems worth solving are likely not solved in 24-hour binges, but in consistently applying and honing your skillset. Not that it is impossible; people do amazing work at hackathons all the time.

But being able to do amazing work at a hackathon comes from the process. The laudable outcomes in computer programming do not come from a series of binges, but from the process of programming, of taking your classes seriously, of learning something every day, of writing something every day.

What does this mean?

Self-improvement is not an outcome. There are outcomes in self-improvement, but self-improvement is a process. To be satisfied with an outcome in self-improvement is to separate the reward from the activity which gives rise to the outcome.

Those activities that are hard and worthwhile are going to make you fail. To be satisfied with outcomes is going to lead to a frustrating existence and make you more likely to give up. Love the process and find satisfaction in it, and you will be able to steadfastly endure failure after failure, which is the only way to meaningful success.

Consciousness and Assembly

The instructions that your computer’s central processing unit (the CPU, the “brain” of your computer) uses to accomplish what you ask of it might be revealing about how your flesh-and-blood brain works.

How I came to the idea

I have started to read Jean-Paul Sartre’s Being and Nothingness, and it is proving very rewarding. I chose to begin reading it for two reasons:

  1. I spend a lot of my time coding for class and for myself, but I only ever read for philosophy class, so I’m doing some outside of class philosophy work; and
  2. Existentialism is famously known to be a humanism, and I get the impression that (and this may be totally wrong) it is an inspirational breaking down of human psychology in this vast, scary Universe probably devoid of meaning or purpose, and I could use this right now.

But I find myself a little nervous and skeptical while reading it.

Jean-Paul Sartre and the existentialist movement place a lot of importance on and argue strongly for free will. Making choices is central to their understanding of the human mind and reality.

I am convinced they are wrong, and find the arguments for hard determinism to be valid.

Hard determinism is the belief that given all the information about the speed, location, etc, of all the atoms and sub-atomic elements in the whole Universe and a sufficiently sophisticated understanding of how they interact, a mind could predict with 100% certainty what would happen in the future.

What’s more, the objections to hard determinism do not lead to free will, but rather to the conclusion that we are only captive to the random tendencies of the matter of which our minds are composed. When Sartre is deep into explaining how anguish is the human emotional response generated when we notice the possibility of our non-existence and/or the non-realization of some desirable goal, and that our continued choosing to be and to do is an escape from this anguish, I find myself both inspired and in agreement …

But ultimately disagreeing. And I can explain why with computer science.

Programming, assembly language, and machine code

All of the software you use on any of your digital devices, like your phone’s text-messaging application or Microsoft Word, are programs; they are defined by code. It all looks something like this:

a = 5
b = 7
c = b + a
output(c)

This program “outputs” the value 12.

Everything your programs do is executed on the CPU using 0s and 1s; this is machine code. It all looks something like this:

0101010001000100100

And at a slightly higher level, these numbers can be written as hex values, base-16 instead of the base-2 of the 0-and-1 flavor of machine code. It looks like this:

af ff 0a b3 f5

These are all just numbers. Data. ff in hex, for example, is the decimal number $255$. That binary number up top, all of it, is equal to the decimal number $172,580$.
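These conversions are easy to verify; a quick sketch in Python using the built-in `int` constructor with an explicit base:

```python
# Hex and binary notations are just alternate spellings of the same integers.
print(int("ff", 16))                  # → 255
print(int("0101010001000100100", 2)) # → 172580
```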

Wait! How does what I see in Word get defined in “code” that is ultimately made into just … 0s and 1s … How does my computer know what they “mean”?

The way that Word is made with code is with just more complex versions of the first example, where programmers make something like a “document view” program to build what you see, a “document model” defining how your pages can look and how they are stored, and perhaps some sort of in-between program that dictates how what you see interacts with what’s stored.

The way that your computer knows what 0s and 1s mean is by design! Clever scientists design a chip with a lot of densely packed on-off switches, and when the switches are turned on and off in a certain way, it triggers other on-off switches to go on or off in a predictable way. Furthermore, all the switches can be observed, so when you flip the switches, you can look at the “answer switches” to determine what happened.

Okay, I just about get it. But … uh, how does code become 0s and 1s?

With assembly code! Assembly is the human-readable intermediary between programming languages and CPU “on-off” instructions. Assembly code is defined by the people that make your CPU, and that definition is used by people that make programming languages.

And now you see how everything links up. You have MS Word, which is made with a high-level programming language. This is converted into assembly code for a given type of CPU. This assembly, in one way or another, is made into zeroes and ones.

Observe that your programs, your MS Word, are eventually just made into numbers, into data. Your programs are composed of the same stuff as the content you type into your Word document – values, content, data. Programs are data.
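Python makes “programs are data” easy to see for yourself. The snippet below (a sketch reusing the little addition program from earlier) compiles source code and shows that the compiled program is literally a sequence of byte values:

```python
# Compile the small addition program from above into Python bytecode.
source = "a = 5\nb = 7\nc = b + a"
code = compile(source, "<example>", "exec")

# The compiled program is just bytes -- numbers.
print(type(code.co_code))     # → <class 'bytes'>
print(len(code.co_code) > 0)  # → True: there really are instruction bytes

# Running those numbers computes the value 12, as before.
namespace = {}
exec(code, namespace)
print(namespace["c"])         # → 12
```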

And this is a model I can use to reject free will.

How this all relates

It is common parlance, and in every introduction-to-technology textbook, that the CPU is the “brain” of a computer. The simile is a useful one because it captures the way these things actually are: a CPU is used to make decisions about how your computer should operate in the same way that we have come to believe the brain is the organ used to make decisions.

But these are never really decisions, they are merely computations.

When writing code, it is an illusion that your program is in any way unique from a given set of numbers. It is a useful illusion that allows a programmer to extend the possibilities of useful and meaningful combinations of numbers to be fed to a processor.

I think that likewise, when making decisions, it is an illusion that your “decision” is any different from other types of events. The world-view of the determinist is one of events. Every event is preceded by the necessary prior events and followed by the necessary and inevitable future events. When a rock is triggered to roll down a mountain by a strong wind, the event is composed of a relatively large series of atomic events that led to its happening. With a sufficient understanding of the present condition, it would be possible to predict that the rock would be made to move. A meteorologist, geologist, and physicist could work together to accurately predict the weather, the threshold for moving the rock, etc.

In this metaphor, events are just “values”, just numbers. They represent a state of reality. I predict that as we learn more and more about smaller and smaller particles, there will be some level at which either there is something or there is nothing. The aggregation of these atomic somethings and nothings forms progressively larger collections of somethings or nothings. Atoms represent a certain collection of somethings and nothings in that there is the “something” we have called the nucleus, the “something” we call the electrons, and the “nothing” in between the two groups. Additionally, a nucleus is composed of two types of “somethings”, protons and neutrons. We are finding that those are composed of their own types of “somethings”, which we presently do not know much about; they are called sub-atomic particles.

I believe it is inevitable that there is some level of somethingness and nothingness which is atomic (not to be confused with atoms, which we now know are not atomic – whoops, Western science jumped the gun with that naming convention). Somethingness and nothingness can be thought of in terms of the binary number system, where one is something and zero is nothing. If you can assign this level zeroes and ones, you can assign progressively larger values to the aggregates of this something or nothingness.

If this is true, then all states of reality, all aggregate compositions of somethingness and nothingness can be assigned a value. Reality is data.
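The assignment of values can be sketched directly: give each atomic “something” a 1 and each “nothing” a 0, and any aggregate state collapses into a single number. (The eight-part state below is invented purely for illustration.)

```python
# Encode an aggregate state of somethings (1) and nothings (0) as one value.
state = [1, 0, 1, 1, 0, 0, 0, 1]  # an invented eight-part state

# Read the sequence as a binary number: each position is a power of 2.
value = 0
for bit in state:
    value = value * 2 + bit

print(value)  # → 177, the single number naming this state
```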

This would have consequences for the brain because it would mean that states of the brain, the configuration of a brain when it is holding a belief or performing an action, can also be reduced to values.

From the perspective of the brain, and admittedly I do not understand this at all rigorously, somethingness can be thought of as a neuron being “on” and nothingness as a neuron being “off.” In a series of firings of electrical impulses, states of the brain change. When you feel pain, neurons send “somethings” up to the brain, and that changes the configuration of the ons and offs in a particular way, and you respond accordingly.

Furthermore, when you are presented with a decision, your brain gathers information about the decision using your perception, your sense organs. This is stored and is accessible. Using some decision making function, you appear to come to some conclusion and you enact it.

States of reality, under this model, are values. States of your brain, and the beliefs and processes those states represent, are values. As physicists decode the language of reality and as neuroscientists decode the language of the brain, I predict that we are going to become increasingly aware that the Universe and our brains are complex machines. With this realization it will become increasingly clear that just as code being unique from data is an illusion, it is also an illusion that events in the world are unique from states of reality, and it is also an illusion that decision-making is unique from states of the brain.

This is exciting because it means as we come to understand the machines we make more and more and as they become more sophisticated, it will be revealing about the machine that makes and understands these increasingly sophisticated machines. Furthermore, as we come to understand the machine that we are more and more, we will be able to produce increasingly better reproductions of thinking.

Computer Architecture Lecture

  • What I encourage you to do is to write miniature programs
    • For example, for the .size directives, write a program that does nothing but read those strings of bytes and turn them into a hex number, just to show that you can read these numbers and properly convert them.
    • With this, you have a piece of code you can trust and transplant and really make sure it works.
    • This is not a requirement, but it is strongly encouraged
  • Moving on, the flags that affect various Y86 instructions.
            zf  sf  of
    addl    y   y   y
    subl    y   y   y
    andl    y   y   0
    xorl    y   y   0
    mull    y   y   y
    readb   y   0   0
    readw   y   0   0
    

Ethical and Social Decision Making

It’s impossible for me to determine how to act or how to feel, and this is because I’ve come to these conclusions:

  1. Knowledge, the intersection of truth, justification, and belief, is an impossible standard;
  2. Objectivity in morality is an impossible standard;
  3. Free will is unlikely: the study of the brain and the observation of the universe reveal that living matter is matter like any other, abiding by the same predictable laws.

Yet despite accepting these, I continue to act and feel, and it seems like I have a choice in the matter. This is disingenuous or hypocritical or dishonest. Remaining in control of one’s emotions is an ideal I work hard to live up to, yet I don’t believe there could be such a thing as being “in control” and I can’t see how emotions can be anything other than states of the brain. Coming to the right moral conclusion is something I value, but I can’t rationalize there truly being any such thing. How do I deal with this? Because I wake up every day and “make decisions” and try to “do the right thing.”

Knowledge

The reason that knowledge, in the sense of holding a justified true belief, is impossible is that there is no way of determining what is actually true, only what you are justified in believing to be true.

So, to handle the impossibility of knowledge, I throw out the impossible standard. I don’t know anything, but I do have something else, something akin to knowledge: justified beliefs, and that’s what I use to make decisions, out of a practical and human need.

Interestingly enough, I learned today that it’s possible to hold a justified true belief that still isn’t knowledge. For instance, say you’re watching a football game, Rutgers vs. Penn State, on television. It is actually true that Rutgers is winning. But you’ve been duped by the broadcaster, who is showing a game where Rutgers is winning, only it’s from last year. Therefore:

  1. You are justified in thinking Rutgers is winning,
  2. It is actually true that Rutgers is winning,
  3. You believe Rutgers is winning.

But we’re not tempted to say you know it. There must be something more to knowledge, something not captured by justified true belief. So it’s even more hopeless.

So, to live, I throw out knowledge: call me the practical knowledge skeptic.

Objectivity and morality

The reason an objective morality is impossible comes down to the is/ought distinction and the naturalistic fallacy: morality depends on the existence of conscious beings and is not itself part of reality. Objective morality would require a god, and I don’t believe in a god, so I don’t believe there is an objective morality. And even if there were one, is it good because god says so, or does god say so because it’s good? Either way, it’s shallow or unnecessary.

Instead, I again throw out the impossible standard. I must simply, arbitrarily establish an end to morality, a first principle, and come to conclusions based on it. I think, like Mill, that happiness is the best first principle, the best end of human behavior. Why? Descriptively, I think people do act to become happy. And prescriptively, I think happier states of mind are desirable, and human beings, being the objects of morality, ought to value what is desirable. It’s not objective, but it’s practical and human.

By throwing out two impossible standards, I use justified beliefs and an arbitrary but human first principle, happiness, to “make decisions.”

Determinism and reductionism

So I believe everything is likely predetermined. Not by any divine first mover or benevolent dictator, but instead by simple facts about how reality seems to operate.

  1. All events now are the consequence of prior events,
  2. All future events are the consequence of events now,
  3. Therefore, events now determine all future events.

This sets aside the first-event debate, simply because it isn’t relevant here.

Additionally,

  1. I am, I am aware, I think, feel, and have a body,
  2. The processor that informs my body and does the feeling is my brain,
  3. My brain is a physical object, a place where events occur, however complex.

Combine the two arguments and you have my naive deterministic reductionism: my brain is a place where events occur wholly due to past events. Even if you dispute the first set of premises and tell me there are some random events, that still doesn’t give me choice; it just makes me hostage to the rolling of atomic dice.

When I deliberate, I accept that I ultimately won’t be able to freely choose my conclusion. I was simply determined to deliberate, and if I weren’t, I wouldn’t. I don’t see why determinism and deliberation are incompatible.

Conclusion

I’m writing up my beliefs like this so I can explicitly identify what it is I believe, so that I can reflect on it and grow. This is not intended to be educational, rigorous, or groundbreaking, merely honest.