Tuesday, September 9, 2008

JavaMail SMTP Authentication

There are a couple of ways to authenticate yourself when using the JavaMail APIs to send e-mail through an SMTP server.

Here's one way:


public void sendMail(String fromAddress,
                     String recipients,
                     String subject,
                     String content,
                     String contentType,
                     String smtpHost,
                     int smtpPort,
                     String username,
                     String password) {
    try {

        Properties props = System.getProperties();
        Session session = Session.getDefaultInstance(props, null);

        MimeMessage message = new MimeMessage(session);

        message.setFrom(new InternetAddress(fromAddress));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(recipients, false));

        message.setSubject(subject);
        message.setContent(content, contentType);
        message.setSentDate(new Date());

        Transport transport = session.getTransport("smtp");
        transport.connect(smtpHost, smtpPort, username, password);
        transport.sendMessage(message, message.getAllRecipients());
        transport.close();

    } catch (AddressException e) {
        e.printStackTrace();
    } catch (MessagingException e) {
        e.printStackTrace();
    }
}


I don't like that way, because it's no good for anything but sending via an SMTP server with password-based authentication. So, if I want to send e-mail through an SMTP server that has, say, a white-listed IP policy, so that I don't need to provide authentication, I have to write a whole new function that duplicates a significant part of that code. Also, that long list of arguments to the function is just ugly, but it's what you're stuck with if you abstract a function that's hard-coded for SMTP password authentication.

Here's a better way that's abstracted nicely:

public void sendMail(Properties props,
                     Authenticator authenticator,
                     String fromAddress,
                     String recipients,
                     String subject,
                     String content,
                     String contentType) {
    try {

        Session session = Session.getDefaultInstance(props, authenticator);

        Message message = new MimeMessage(session);

        message.setFrom(new InternetAddress(fromAddress));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(recipients, false));

        message.setSubject(subject);
        message.setContent(content, contentType);
        message.setSentDate(new Date());

        Transport.send(message);

    } catch (AddressException e) {
        e.printStackTrace();
    } catch (MessagingException e) {
        e.printStackTrace();
    }
}


The function above can be used to send e-mail with all sorts of configurations. All you have to do is pass it the Properties and, if you need to, an instance of a class extending javax.mail.Authenticator. Here's an example to send an e-mail identical to the first example above:

Properties props = System.getProperties();
props.put("mail.smtp.host","smtpHost");
props.put("mail.smtp.port","smtpPort");
props.put("mail.smtp.auth","true");

Authenticator auth = new Authenticator() {
    public PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication("username", "password");
    }
};

sendMail(props,auth,"no-reply@mysite.com","user@site.com","Hello World","Wow, big world.","text/plain");


And, to send an e-mail without authentication (as in the IP white-list policy):

Properties props = System.getProperties();
props.put("mail.smtp.host","smtpHost");
props.put("mail.smtp.port","smtpPort");

sendMail(props,null,"no-reply@mysite.com","user@site.com","Hello World","Wow, big world.","text/plain");


The great part is that the function sending the e-mail doesn't need to know anything about how it's going to do it, since the configuration is entirely passed in via the Properties and the optional Authenticator. There is, however, a catch that I've noticed has tripped people up all over the internet. The values you set on your Properties object must all be of class String. For example, this works:

Properties props = System.getProperties();
props.put("mail.smtp.host","mail.mysite.com");
props.put("mail.smtp.port","587");
props.put("mail.smtp.auth","true");

Whereas, this does not:

Properties props = System.getProperties();
props.put("mail.smtp.host","mail.mysite.com");
props.put("mail.smtp.port",587);
props.put("mail.smtp.auth",true);

It doesn't matter that the values for mail.smtp.port and mail.smtp.auth are an int and a boolean, respectively. If they are not specified as a String, Transport.send(message) will fail.
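Incidentally, one way to make this mistake impossible is to use Properties.setProperty() instead of put(): setProperty() only accepts String arguments, so a non-String value won't even compile. A quick illustration (the host and port values are just placeholders):

```java
import java.util.Properties;

public class MailProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        // setProperty(String, String) only accepts Strings, so the compiler
        // rejects a raw int or boolean here -- unlike put(), whose
        // Object parameters are inherited from Hashtable.
        props.setProperty("mail.smtp.host", "mail.mysite.com");
        props.setProperty("mail.smtp.port", "587");
        props.setProperty("mail.smtp.auth", "true");
        System.out.println(props.getProperty("mail.smtp.port"));
    }
}
```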

* As a disclaimer, I typed the above code, rather than copy and paste from a program I've compiled, so I apologize if there's a typo anywhere up there that prevents it from compiling.

Sunday, September 7, 2008

Socket Programming with Flex and Apache Mina (Part 2)

I was responding to narup's comment on my previous post, Socket Programming with Flex and Apache Mina, and my response got so long that I decided it was worth a follow-up post. The previous post focused mainly on the parts involved in socket layer communications and some common pitfalls. This post will focus more on decisions when defining a protocol to use in socket layer communications. narup asked two things:
  1. Whether I used binary or XML sockets, and
  2. Some general clues on the whole architecture
The most important thing is to define your protocol. If you choose a binary protocol, some important decisions to make are:
  • Whether to have a header and, if so, what it will contain
  • Whether a command will have a fixed or variable length and, if variable, how to know the length
  • Whether to have the header indicate the type of command and, if so, how many types you might implement, which will help determine how many bytes you allocate for the command.
I'm a strong believer that it's impossible to anticipate every case you'll ever need, so I like to leave myself room to expand without having to change the protocol. This inclines me toward variable length commands.

Command Length
Reading a fixed length command is easy. If your protocol is 8 bytes of data, you read 8 bytes. Variable length commands, by their nature, are of an unpredictable length. Ergo, you can't just read bytes blindly. Each command must somehow transmit with it an indicator for the size of the command. Some options are terminating with a null byte, a particular sequence of bytes (like an EOF character), or indicating the length before the command.

Terminating with a null byte can be problematic if for some reason you have a null byte in your command. I don't want to rule out that possibility, so I rule out that strategy. I also don't like reading ahead without any idea of how far I'm going, so I'm just not a big fan of terminating with a sequence. This can be especially tricky when your command doesn't show up in its unique entirety, but rather comes in pieces or along with another command. Knowing how far to read by prefacing the command with a byte or four indicating the length, however, is clean and simple.

I really like variable length commands because they're easily expandable: if you decide to 10x the size of the data on which you're operating, the protocol handles it, but it doesn't require every message to be that 10x size, especially if your data starts out at 1/100th of that.

Headers
When you have the length prefacing the command, you might as well call it a header. I like headers. They separate operations from data. I like sending a command with a header that indicates what to do and the length of the data upon which to act. Thus, the command body contains only the data. Simple. But now we're back to the length issue again... is our header variable or fixed? I actually like fixed header lengths, because otherwise you end up with the same prefacing or termination issues I just discussed for the body. Since a header is more rigidly defined (it's specific to the protocol rather than the data) and less likely to undergo volatile changes as your development progresses (unless you change your protocol, in which case this is all out the window), it's easy to define a fixed length header with ample room for growth.

For example, if in all the planning I can only define 10 command types, I'm still likely to allocate a whole byte to specifying the command, despite the fact that I could fit my 10 known types into 4 bits with 6 as-yet-unknown types to spare. The fact of the matter is that 8 bits vs. 4 bits is a minuscule difference, but if, months later, I needed 17 types, I certainly wouldn't want to have to rewrite my encoders/decoders. The same goes for the number of bytes to allocate toward indicating the length of the command; maybe you only need 2 bytes for now, but why not use a full integer just in case? Those 2 additional bytes aren't going to break the bank.
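To make this concrete, here's a minimal sketch of a codec for a hypothetical protocol along these lines: a fixed 5-byte header holding one command-type byte and a full int for the body length. The header layout and class name are my own illustration, not any particular protocol I've described above:

```java
import java.nio.ByteBuffer;

public class FrameCodec {
    // Hypothetical fixed header: 1 byte command type + 4 byte body length.
    static final int HEADER_SIZE = 5;

    static ByteBuffer encode(byte commandType, byte[] body) {
        ByteBuffer buf = ByteBuffer.allocate(HEADER_SIZE + body.length);
        buf.put(commandType);
        buf.putInt(body.length);
        buf.put(body);
        buf.flip();
        return buf;
    }

    // Returns null until a complete frame has accumulated -- the caller
    // keeps appending incoming bytes and tries again.
    static byte[] decodeBody(ByteBuffer in) {
        if (in.remaining() < HEADER_SIZE) return null;
        in.mark();
        byte commandType = in.get(); // dispatch on this in a real decoder
        int length = in.getInt();
        if (in.remaining() < length) {
            in.reset(); // partial frame: rewind and wait for more bytes
            return null;
        }
        byte[] body = new byte[length];
        in.get(body);
        return body;
    }

    public static void main(String[] args) {
        ByteBuffer frame = encode((byte) 1, "hello".getBytes());
        System.out.println(new String(decodeBody(frame)));
    }
}
```

Because decodeBody() returns null until a whole frame is buffered, it also covers the case where a command arrives in pieces or packed together with the start of the next one.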

The Right Socket for Your Protocol
This is a bit of a square-peg, round-hole point. It's not usually that definite, but once you've defined your protocol, the decision about how to carry out the transmissions will mainly depend on how much you value human-readability vs. markup overhead, as well as what you're most comfortable with.

I used binary, since the server would be relaying thousands of commands a second and I wanted to be as lightweight as possible. That said, reading and writing byte arrays to process your command is a recipe for obscure bugs you'll spend hours tracking down. By writing protocol encoders/decoders on both the Mina and Flex sides, and logic testing those independently of the system, I was able to handle the commands in an OO fashion within my programs, but still transmit the command in a super lightweight fashion.

In my case, since I had confirmed the correctness of the encoder/decoder logic, I really didn't care at all about the human readability that XML would have provided. Plus, the overhead of the markup was undesirable both in terms of network transfer and parsing time.

Additional Note
I should probably put a disclaimer that I also prefer sending and receiving JSON over XHR instead of XML, as well as properties files instead of XML config files, haha. I did use XML sockets when writing a Mina-based server to act solely as a policy file server, though, and I could think of cases where it makes sense.

Friday, September 5, 2008

Connecting to EC2 from iSSH on the iPhone

I've been debating buying an SSH app for my iPhone for some time, but wanted to wait until I found one that supported key-based authentication so I could connect to EC2 instances.  After careful evaluation, I decided that iSSH looked like the best one to try out, and so I purchased it from the iPhone App Store yesterday.

iSSH has a great little feature to generate its own key and then transfer that key to any machine you want, but it requires that you know the password for that machine.  When it comes to EC2 instances, I've never used a password and, to my knowledge, there isn't a user account with a password on them.  Problem.

After posting on the iSSH Google group asking if there was going to be a way to transfer a key to iSSH, Chris Jones responded that I might be able to connect if I could associate multiple keys with my AWS account.  I looked into this and discovered that, no, Amazon does not let you associate multiple X.509 certificates with an AWS account.  However, in this process, I realized that there wasn't anything stopping me from adding iSSH's public key to the authorized_keys file on the machine I wanted to connect to.

Here are the steps:
  1. If you don't have one already, install an SSH server on your computer.  I have a MacBook Pro running OS X 10.5, so all I had to do was enable Remote Login in System Preferences, which enables the SSH server under the hood.
  2. Use iSSH's transfer function to transfer the iSSH key to your computer.
  3. Copy the iPhone iSSH key from ~/.ssh/authorized_keys (if you have multiple keys, it will be indicated by a comment following the key of the form "iphone-rsa-key-<SOME_NUMBER>").
  4. Add the key to the ~/.ssh/authorized_keys file of the EC2 instance you want to connect to.

Voila! Key-based authentication allowing iSSH to connect to an EC2 machine, or any machine for that matter.  If you chose to have iSSH use a password/passphrase when creating its key, you'll have to enter it whenever you connect, but that's probably a good thing in case you lose your phone...

Thursday, September 4, 2008

Glassfish: Dynamic Reconfiguration

There is an option in the Glassfish server config for "dynamic reconfiguration".   I've stumbled across many a forum post asking,
Why does glassfish not reflect my configuration changes?  I have dynamic-reconfiguration enabled and I see the change reflected in the domain.xml, but the config changes aren't live!
The answer is simple, but not intuitive:  That's not what dynamic-reconfiguration means.  From the engineering document, dynamic-reconfiguration is defined as the  
Ability to be able to control the output of statistical data while the server is running.
Thus, dynamic-reconfiguration does not mean that Glassfish and the JVM are dynamically reconfigurable, but rather that monitoring levels can be changed without having to restart the server.  This is especially useful if you want to enable monitoring when your server is bogged down to find out why, but don't want the overhead of monitoring running all the time.

So, in short, love dynamic-reconfiguration for what it is, but don't think it's a catch-all for changing anything and seeing the change right away.

Sending Large Amazon SQS Messages with Typica in Java

Here's a real-world case I ran into building JamLegend (http://jamlegend.com)...

Part of JamLegend is in the web tier and part is in multiple socket-layer servers built on Apache Mina's awesome non-blocking technology. When you have only one of either, communication is easy, but when you have multiple of each, you really need a system to communicate effectively no matter how many of each, or which particular instances, happen to be running at any given moment. Fortunately, Amazon SQS comes to the rescue and, combined with Typica (http://code.google.com/p/typica/), makes it brilliantly simple to distribute jobs amongst any size group of workers.

But SQS has its own limitation. From the documentation,
Amazon SQS messages can contain up to 8 KB of text data, including XML, JSON and unformatted text.
But that isn't the whole story. More specifically, SQS messages can contain up to 8KB of UTF-16 encoded text data, which makes a big difference in the max character length of your message. So, now we have a constraint to work with, but it's not hard to imagine a case where sending larger messages is quite useful.
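To see what that means in practice: Java's "UTF-16" encoding uses two bytes per character plus a two-byte byte-order mark, so an 8 KB message tops out around 4,095 characters, not 8,192. A quick check (the string here is just an example):

```java
import java.nio.charset.StandardCharsets;

public class Utf16Size {
    public static void main(String[] args) {
        String s = "hello";
        // 5 chars * 2 bytes each, plus a 2-byte byte-order mark = 12 bytes.
        // Scaled up: (8192 - 2) / 2 = 4095 characters max per SQS message.
        System.out.println(s.getBytes(StandardCharsets.UTF_16).length);
    }
}
```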

Ergo, a code tutorial on chunking SQS messages and some brief discussion of other considerations.

I use Typica, as I mentioned above, which is an awesome SQS utility for Java. Assume for the following that your internal protocol for SQS messages is:
header;content
Given that, chunking a large SQS message into multiple smaller messages is pretty straightforward. First, that header is going to need to accompany all of the messages. Then, we'll repeatedly form messages of the maximum size until we've sent all the data. In the following code, message would be the raw text of the SQS message you wish to send.

Some helpers:

public static final String UTF_16 = "UTF-16";

public static final int SQS_MAX_MESSAGE_SIZE = 8192;


Now onto the logic:

// Connect to the queue
MessageQueue queue = SQSUtils.connectToQueue(SQS_QUEUE_NAME, AWS_ACCESS_KEY_ID, AWS_SHARED_SECRET);

// if the message is small enough to be sent as one message, do it
if (message.getBytes(UTF_16).length <= SQS_MAX_MESSAGE_SIZE) {
    queue.sendMessage(message);
} else {

    // if it's too big for one message, chunk it and send as multiple

    // split the message according to the protocol, but limit it
    // in case the content portion contains a semi-colon
    String[] parts = message.split(";", 2);

    // break off the header; this is needed for each chunk
    byte[] header = (parts[0] + ";").getBytes(UTF_16);

    // get the content as a byte[]
    byte[] content = parts[1].getBytes(UTF_16);

    // figure out how much content can be in each chunk
    int chunkSize = SQS_MAX_MESSAGE_SIZE - header.length;

    // create a byte[] for our max message size.
    // we're going to repeatedly fill this and send the message while
    // content remains.
    byte[] bytes = new byte[SQS_MAX_MESSAGE_SIZE];

    // copy the header into the byte[]; we only need to do this once
    System.arraycopy(header, 0, bytes, 0, header.length);

    // while there is content left, send a message
    for (int i = 0; i < content.length; i += chunkSize) {

        // copy the smaller of the remaining bytes or the max chunkSize chunk
        // of content into the message array, then send the message. Form the
        // message String from the appropriate portion of the array
        if (content.length - i < chunkSize) {
            System.arraycopy(content, i, bytes, header.length, content.length - i);
            message = new String(bytes, 0, header.length + content.length - i, UTF_16);
        } else {
            System.arraycopy(content, i, bytes, header.length, chunkSize);
            message = new String(bytes, UTF_16);
        }

        // send the chunk
        queue.sendMessage(message);
    }
}

Since the max size is a measure of UTF-16 bytes, I found it easiest to just deal with everything as a byte[], but you could do it other ways if you felt like it. The bigger thing to note is that this is really only half the solution, since now you face the task of handling chunked SQS messages on the other side. Possible solutions include:

  • Ensuring each message is executable alone, and thus the desired effect is the sum of the executions of each individual message. This has the benefit of not caring which of the workers receives each message, and was the solution I chose to implement.

  • Ensuring each message is received by a particular worker, which will then reassemble the messages and execute the whole. For this to work, you really should add a field to the header indicating how many chunks compose the whole, and have each chunk identify itself as chunk x of y. That way, the worker knows when it has received the entire message and may execute its task.
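As a sketch of that second approach, here's a minimal reassembler assuming a hypothetical header of the form id|x|y;content, where x is the 1-based chunk index and y the total chunk count. The header format and class name are illustrative, not part of the protocol above:

```java
import java.util.HashMap;
import java.util.Map;

public class ChunkAssembler {
    // Partially received messages, keyed by message id.
    private final Map<String, String[]> pending = new HashMap<>();

    // Returns the full content once all y chunks of a message have
    // arrived, or null while chunks are still outstanding.
    public String offer(String rawMessage) {
        String[] parts = rawMessage.split(";", 2);
        String[] header = parts[0].split("\\|");
        String id = header[0];
        int x = Integer.parseInt(header[1]);
        int y = Integer.parseInt(header[2]);

        String[] chunks = pending.computeIfAbsent(id, k -> new String[y]);
        chunks[x - 1] = parts[1];

        for (String c : chunks) {
            if (c == null) return null; // still waiting on a chunk
        }
        pending.remove(id);
        return String.join("", chunks);
    }

    public static void main(String[] args) {
        ChunkAssembler asm = new ChunkAssembler();
        System.out.println(asm.offer("42|1|2;Hello, "));
        System.out.println(asm.offer("42|2|2;world!"));
    }
}
```

A production version would also need to expire incomplete messages, since SQS's at-least-once delivery means a chunk can arrive twice or be delayed indefinitely.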